Build a globally resilient architecture with Azure Load Balancer



Networking
Wed, 02 Nov 2022

Azure Load Balancer’s global tier is a cloud-native network load balancing solution. With cross-region Load Balancer, customers can distribute traffic across multiple Azure regions with ultra-low latency and high performance. To better understand where cross-region Load Balancer fits, let’s walk through a customer scenario: who the customer is, what problem they faced, and how Azure Load Balancer solved it.

Who can benefit from Azure Load Balancer?

This example customer is a software vendor in the automotive industry. Its current product offerings are cloud-based software focused on helping vehicle dealerships manage all aspects of their business, including sales leads, vehicles, and customer accounts. While it is a global company, most of its business is done in Europe, the United Kingdom (UK), and the Asia Pacific region. To support its global business, the customer uses a wide range of Azure services, including virtual machines (VMs), a variety of platform as a service (PaaS) solutions, Load Balancer, and MySQL, to help meet ever-growing demand.

What are the current global load balancing solutions?

The customer was using Domain Name System (DNS)-based traffic distribution to direct traffic to multiple Azure regions. In each Azure region, they deployed regional Azure Load Balancers to distribute traffic across a set of virtual machines. However, when a region went down, they experienced downtime because clients continued to resolve cached DNS entries pointing at the failed region. Although minimal, this was not a risk they could continue to accept as their business expanded globally.

What are the problems with the current solutions?

Because the customer’s solution is global, they noticed increasing latency as traffic grew and requests crossed regions. For example, users located in Africa saw high latency because their requests were often routed to an Azure region on another continent. Answering requests with low latency is a critical business requirement for business continuity. As a result, the customer needed a solution that could survive a regional failure while simultaneously providing ultra-low latency and high performance.

How did Azure’s cross-region Load Balancer help?

Given that low latency was a requirement for the customer, a global layer 4 load balancer was a natural fit. The customer deployed Azure’s cross-region Load Balancer, which gave them a single, globally anycast IP address to load balance across their regional deployments. With cross-region Load Balancer, traffic is distributed to the closest participating region, ensuring low latency. For example, if a client connects from the Asia Pacific region, traffic is automatically routed to the closest region, in this case Southeast Asia. The customer added all of their regional load balancers to the backend pool of the cross-region Load Balancer and thus improved latency without any additional downtime. Before rolling the update out across all regions, the customer verified that the data path availability and health probe status metrics were at 100 percent on both the cross-region Load Balancer and each regional load balancer.
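For readers who want to try this pattern, here is a minimal, hypothetical sketch using the Azure SDK for Python (azure-mgmt-network). All names, the resource group, the home region, and the public IP resource ID are placeholders, not the customer’s actual deployment; the regional load balancers’ frontends would subsequently be added to the backend pool as described in the cross-region documentation.

```python
# Minimal sketch: create a global-tier (cross-region) Standard Load Balancer
# with the Azure SDK for Python. All names and IDs below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
network = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# A cross-region load balancer is a Standard SKU load balancer with the
# Global tier, created in one of the supported home regions. Its frontend
# must reference a Global-tier Standard public IP address.
poller = network.load_balancers.begin_create_or_update(
    "my-resource-group",
    "my-cross-region-lb",
    {
        "location": "eastus2",
        "sku": {"name": "Standard", "tier": "Global"},
        "frontend_ip_configurations": [
            {
                "name": "global-frontend",
                "public_ip_address": {
                    "id": (
                        "/subscriptions/<subscription-id>/resourceGroups/"
                        "my-resource-group/providers/Microsoft.Network/"
                        "publicIPAddresses/my-global-public-ip"
                    )
                },
            }
        ],
        # The regional load balancers' frontends are then added to this pool.
        "backend_address_pools": [{"name": "regional-lb-pool"}],
    },
)
global_lb = poller.result()
print(global_lb.name, global_lb.sku.tier)
```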
 

Figure: An Azure cross-region Load Balancer distributes end-user traffic to regional Azure Load Balancers in East Asia, UK West, and South Africa North; each regional load balancer in turn distributes traffic across two virtual machines in its region’s virtual network.

After deploying cross-region Load Balancer, traffic is distributed across regions with ultra-low latency. Because cross-region Load Balancer is a network load balancer, only the TCP/UDP headers are inspected rather than the entire packet, and traffic is sent to the participating Azure region closest to the client. The customer now sees traffic served with noticeably lower latency than before.

Learn more

Visit the Cross-region load balancer overview to learn more about Azure’s cross-region Load Balancer and how it can fit into your architecture.

Mahip Deora


Sharing the latest improvements to efficiency in Microsoft’s datacenters
Big Data
Updates
Wed, 02 Nov 2022

In April, I published a blog that explained how we define and measure energy and water use at our datacenters, and how we are committed to continuous improvements.

Now, in the lead-up to COP27, the global climate conference to be held in Egypt, I am pleased to provide a number of updates on how we’re progressing in making our datacenters more efficient across areas such as waste, renewables, and ecosystems. You can also visit Azure Sustainability—Sustainable Technologies | Microsoft Azure to explore this further.

Localized fact sheets in 28 regions

To share important information about the impact of our datacenters regionally with our customers, we have published localized fact sheets in 28 regions across the globe. These fact sheets provide a wide range of information and details about many different aspects of our datacenters and their operations.


A review of PUE (Power Usage Effectiveness) and WUE (Water Usage Effectiveness)

Figure: A fact sheet with data about Microsoft’s Ireland datacenter region.

PUE is an industry metric that measures how efficiently a datacenter uses energy, covering everything needed to power, cool, and operate the servers, data networks, and lights. The closer the PUE number is to 1, the more efficiently the energy is used. Local environment and infrastructure can affect how PUE is calculated, and there are also slight variations in methodology across providers.

Here is the simplest way to think about PUE:

PUE = (total energy consumed by the datacenter facility) ÷ (energy consumed by the IT equipment)

WUE is another key metric relating to the efficient and sustainable operations of our datacenters and is a crucial aspect as we work towards our commitment to be water positive by 2030. WUE is calculated by dividing the number of liters of water used for humidification and cooling by the total annual amount of power (measured in kWh) needed to operate our datacenter IT equipment.

WUE = (liters of water used annually for humidification and cooling) ÷ (annual kWh of energy consumed by the IT equipment)
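Expressed as code, both metrics are simple ratios. Here is a toy calculation; the input figures are invented for illustration and are not Microsoft’s reported values.

```python
# Toy PUE/WUE calculation. The inputs are invented for illustration and are
# not Microsoft's reported figures.
total_facility_energy_kwh = 1_120_000  # everything the site consumes in a year
it_equipment_energy_kwh = 1_000_000    # the portion consumed by IT equipment
annual_water_liters = 120_000          # water used for humidification and cooling

pue = total_facility_energy_kwh / it_equipment_energy_kwh
wue = annual_water_liters / it_equipment_energy_kwh

print(f"PUE = {pue:.2f}")        # 1.12 -> the closer to 1, the more efficient
print(f"WUE = {wue:.2f} L/kWh")  # 0.12 liters per kWh of IT energy
```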

In addition to PUE and WUE, below are key highlights across carbon, water, and waste initiatives at our datacenters.

Datacenter efficiency in North and South America

As I illustrated in April, our newest generation of datacenters has a design PUE of 1.12; this includes our Chile datacenter, which is under construction. We are constantly focused on improving our energy efficiency. For example, in California, our San Jose datacenters will be cooled year-round with an indirect evaporative cooling system that uses reclaimed water and zero fresh water. Because these new facilities will be cooled with reclaimed water, they will have a WUE of 0.00 L/kWh in terms of freshwater usage.

In addition, as we continue our journey to achieve zero waste by 2030, we are proud of the progress we are making with our Microsoft Circular Centers. These centers sit adjacent to a Microsoft datacenter and process decommissioned cloud servers and hardware. Our teams sort the components and equipment and intelligently channel them to be optimized, reused, or repurposed.

In October, we launched a Circular Center in Chicago, Illinois that has the potential capacity to process up to 12,000 servers per month for reuse, diverting up to 144,000 servers annually. We plan to open a Circular Center in Washington state early next year and have plans for Circular Centers in Texas, Iowa, and Arizona to further optimize our supply chain and reduce waste.

Furthermore, our team has successfully completed an important water reuse project at one of our datacenters. This treatment facility, the first of its kind in Washington state and over 10 years in the making, will process water for reuse by local industries, including datacenters, decreasing the need for potable water for datacenter cooling.

Innovative solutions in Europe, the Middle East, and Africa

This winter, Europeans face the possibility of an energy crisis, and we have made a number of investments in optimizing energy efficiency in our datacenters to ensure that we are operating our facilities as effectively as possible. Datacenters are the backbone of modern society, and it is important that we continue to provide critical services to the industries that need us most in a way that minimizes energy consumption.

Across our datacenters in EMEA, we have made steady progress across carbon, waste, water, and ecosystems. We are committed to shifting to a 100 percent renewable energy supply by 2025, meaning that we will have power purchase agreements for green energy contracted for 100 percent of the carbon-emitting electricity consumed by all our datacenters, buildings, and campuses. This will add gigawatts of renewable energy to the grid, increasing energy capacity. To date, we have added more than 5 gigawatts of renewable energy to the grid globally, including more than 15 individual deals in Europe spanning Ireland, Denmark, Sweden, and Spain.

In Finland, we recently announced an important heat reuse project that will take excess heat from our datacenters and transfer it to local district heating systems, where it can be used for both domestic and commercial purposes.

To reduce waste from our datacenters in EMEA, we opened a Circular Center in Amsterdam in 2020, which has since delivered an 83 percent reuse rate for end-of-life datacenter assets and components. This is progress toward our target of 90 percent reuse and recycling of all servers and components for all cloud hardware by 2025. In addition, in January 2022, we opened a Circular Center in Dublin, Ireland, and we plan to open another Circular Center in Sweden to serve the region.

As we continue to seek out efficiencies in our operations, we recently turned to nature for inspiration, exploring how much of the natural ecosystem we could replenish on the site of a datacenter. The goal is essentially to integrate the datacenter into nature, renewing and revitalizing the surrounding area to provide regenerative value for the local community and environment. In the Netherlands, we have begun construction of a lowland forested area and a forested wetland around the datacenter. This supports the growth of native plants to mirror a healthy, resilient ecosystem, supports biodiversity, improves stormwater control, and prevents erosion.

Figure: Rendering of a biomimicry project in the Netherlands showing a concept of using nature to cover the datacenter, alongside an image of an actual datacenter.

Updates in Asia Pacific

Finally, I’d like to highlight some of the sustainability investments we have made across Asia Pacific. In June 2022, we launched our Singapore Circular Center, which is capable of processing up to 3,000 servers per month for reuse, or 36,000 servers annually. We have plans to open additional Circular Centers in Australia and South Korea in fiscal year 2025 and beyond. Across our datacenters in APAC, we have formed partnerships with local energy providers for renewable energy sourced from wind, solar, and hydro power, and we plan to further these partnerships and investments in renewable energy. For our forthcoming datacenter region in New Zealand, we have signed an agreement that will enable Microsoft to power its datacenters with 100 percent renewable energy from the day the region opens.

Innovating to design the hyperscale datacenter of the future

What these examples from across our global datacenter portfolio show is our ongoing commitment to make our global Microsoft datacenters more sustainable and efficient, enabling our customers to do more with less.

Our objective moving forward is to continue providing transparency across the entire datacenter lifecycle about how we infuse principles of reliability, sustainability, and innovation at each step of the datacenter design, construction, and operations process.

  • Design: How do we ensure we design for reliability, efficiency, and sustainability, to help reduce our customers’ scope 3 emissions?
  • Construction: How do we reduce embodied carbon and create a reliable supply chain?
  • Operation: How do we infuse innovative green technologies to decarbonize and operate to the efficient design standards?
  • Decommissioning: How do we recycle and reuse materials in our datacenters?
  • Community: How do we partner with the community and operate as good neighbors?

We have started by sharing datacenter region-specific data around carbon, water, waste, ecosystems, and community development and we will continue to provide updates as Microsoft makes further investments globally.

Learn more

You can learn more about our global datacenter footprint across the 60+ datacenter regions by visiting datacenters.microsoft.com.

Noelle Walsh


Secure your digital payment system in the cloud with Azure Payment HSM—now generally available
Updates
Migration
Tue, 01 Nov 2022

We are very excited to announce the general availability of Azure Payment HSM, a BareMetal infrastructure as a service (IaaS) offering that gives customers native access to payment HSMs in the Azure cloud. With Azure Payment HSM, customers can seamlessly migrate PCI workloads to Azure and meet the stringent security, audit compliance, low-latency, and high-performance requirements of the Payment Card Industry (PCI).

Azure Payment HSM service empowers service providers and financial institutions to accelerate their payment system’s digital transformation strategy and adopt the public cloud.

“Payment HSM support in the public cloud is one of the most significant hurdles to overcome in moving payment systems to the public cloud. While there are many different solutions, none can meet the stringent requirements required for a payment system. Microsoft, working with Thales, stepped up to provide a payment HSM solution that could meet the modernization ambitions of ACI Worldwide’s technology platform. It has been a pleasure working with both teams to bring this solution to reality.”

—Timothy White, Chief Architect, Retail Payments and Cloud

Service overview

The Azure Payment HSM solution is delivered using Thales payShield 10K payment HSMs, which offer single-tenant HSMs and full remote management capabilities. The service is designed to give the customer total control, with strict role and data separation between Microsoft and the customer. HSMs are provisioned and connected directly to the customer’s virtual network and are under the customer’s sole administrative control. Once an HSM is allocated, Microsoft’s administrative access is limited to “Operator” mode, and full responsibility for configuring and maintaining the HSM and its software falls to the customer. When the HSM is no longer required and the device is returned to Microsoft, customer data is erased to ensure privacy and security. The solution comes with the Thales payShield premium package license and an enhanced support plan, with a direct relationship between the customer and Thales.

The HSM provisioning service allocates an HSM device to a customer’s virtual network; the customer can then fully access and manage the HSM remotely with Thales payShield Manager and the payShield Trusted Management Device (TMD).

Figure 1: After an HSM is provisioned, the device is connected directly to a customer’s virtual network, with full remote HSM management capabilities through Thales payShield Manager and TMD.

Customers can quickly add more HSM capacity on demand and subscribe to the highest performance level (up to 2,500 CPS) for mission-critical, low-latency payment applications. They can upgrade or downgrade the HSM performance level based on business needs without interrupting production usage, and HSMs can easily be provisioned as a pair of devices and configured for high availability.

Azure remains committed to helping customers achieve compliance with the Payment Card Industry’s leading compliance certifications. Azure Payment HSM is certified across stringent security and compliance requirements established by the PCI Security Standards Council (PCI SSC) including PCI DSS, PCI 3DS, and PCI PIN. Thales payShield 10K HSMs are certified to FIPS 140-2 Level 3 and PCI HSM v3. Azure Payment HSM customers can significantly reduce their compliance time, efforts, and cost by leveraging the shared responsibility matrix from Azure’s PCI Attestation of Compliance (AOC).

Typical use cases

Financial institutions and service providers in the payment ecosystem, including issuers, service providers, acquirers, processors, and payment networks, will benefit from Azure Payment HSM. It enables a wide range of use cases: payment processing, covering card and mobile payment authorization and 3-D Secure authentication; payment credential issuing for cards, wearables, and connected devices; securing keys and authentication data; and sensitive data protection for point-to-point encryption, security tokenization, and EMV payment tokenization.

Get started

Azure Payment HSM is available at launch in the following regions: East US, West US, South Central US, Central US, North Europe, and West Europe.

As Azure Payment HSM is a specialized service, customers should ask their Microsoft account manager and cloud solution architect (CSA) to send the request via email.

Learn more about Azure Payment HSM

To download PCI certification reports and shared responsibility matrices:

May Chen


Microsoft named a Leader in 2022 Gartner® Magic Quadrant™ for Cloud Infrastructure and Platform Services
Announcements
Cloud Strategy
Mon, 31 Oct 2022

Gartner® recently published its 2022 Magic Quadrant™ for Cloud Infrastructure and Platform Services (CIPS) report. For the ninth consecutive year, Microsoft was named a Leader, and for the first time placed furthest on the Completeness of Vision axis.

Magic Quadrant for Cloud Infrastructure and Platform Services

For years, the industry has trusted Gartner Magic Quadrant reports to provide a holistic review of cloud providers’ capabilities.

Today, we face an uncertain global economy, and as customers consider migrating and modernizing their IT environments, they’re turning to the cloud experts they can trust. Our goal is to be that trusted expert with the most comprehensive cloud platform our customers can rely on to manage their infrastructure and modernize their digital estates, freeing them up to focus on what they do best—create, innovate, and differentiate.

We’re honored by this placement in the Gartner report but know there is more to do, particularly as our customers navigate ongoing uncertainties. As they continue to prioritize cloud investments to build resiliency, we’re committed to making continuous improvements and investments to meet their needs.

From cloud to edge: We help customers innovate anywhere

Our long-standing hybrid and multicloud approach is unique in empowering organizations from any industry, wherever they are in their cloud journey, and for whatever use cases they can dream up, to achieve more with Microsoft Azure.

This approach has long enabled our customers to control and manage their sprawling IT assets, ensure consistency, and meet regulatory and sovereignty requirements. Now, as customers leverage the cloud to build new products and offerings that help them stay agile and competitive, Azure and solutions like Azure Arc help organizations innovate anywhere.

Azure Arc operates as a bridge extending across the Azure platform by allowing applications and services the flexibility to run across datacenters, edge, and multicloud environments. Customers across industries including financial services, retail, consumer goods, and manufacturing are realizing the benefits of Azure Arc to address their unique business needs.

Our investments in Azure Arc continue. At Microsoft Build this year, we announced that any Cloud Native Computing Foundation (CNCF)-conformant Kubernetes cluster connected through Azure Arc is now a supported deployment target for Azure application services.

In August, we announced the public preview of Microsoft Dev Box, a managed service that enables developers to create on-demand, high-performance, secure, ready-to-code, project-specific workstations in the cloud so they can work and innovate anywhere. And, more recently at Microsoft Ignite, we announced the availability of Arc-enabled SQL Server and new deployment options for Azure Kubernetes Services enabled by Arc, so customers can run containerized apps regardless of their location.

To help our customers optimize their cloud investments, we offer pricing benefits like Azure Hybrid Benefit, which provides a way to use existing on-premises Windows Server and SQL Server licenses in the cloud at no additional cost. We also understand customers may need additional help to ensure workloads remain secure and protected, with hybrid flexibility, as they move.

Earlier this month, we announced the expansion of Azure Hybrid Benefit to include Azure Kubernetes Service (AKS). Now our customers can deploy AKS on Azure Stack HCI or Windows Server in their own datacenters or edge environments at no additional cost. This ensures a consistent, managed Kubernetes experience from cloud to edge for both Windows and Linux containers.

I am always inspired by the ways our customers use our solutions to do more with less, and at the same time, overcome longstanding security and governance challenges.

Performance, scale, and mission-critical capability for all applications and workloads

We continuously invest to make Azure the best place for customers to run their mission-critical workloads, like SAP. Because of offerings like Azure Center for SAP Solutions, an end-to-end solution to deploy and manage SAP workloads on Azure, we’ve become the platform of choice for SAP apps on the cloud.

We’re also making significant investments to support our customers’ largest Windows Server and SQL Server migration and modernization projects, up to 2.5 times more than previous investments.1 This will provide even more migration support in two ways: partner assistance with planning and moving workloads, and Azure credits that offset transition costs during the move to Azure Virtual Machines, Azure SQL Managed Instance, and Azure SQL Database.

Global reach and expansion to meet digital sovereignty needs

As the cloud provider with the most datacenter regions—60+ worldwide—we also have a deep commitment to infrastructure expansion for our customers around the world. This year, we launched datacenters in Sweden and Qatar and will launch 10 more regions over the next year.

We also recently launched Microsoft Cloud for Sovereignty for government and public sector customers, designed to meet heightened requirements for data residency, privacy, access control, and operational compliance in cloud and hybrid environments.

Microsoft Cloud and Azure help customers unlock business potential

At Microsoft, we have been through our own digital transformation. We brought products like Microsoft Office to the cloud, and we draw from that experience to empower customers to achieve more through the cloud. We understand the power and promise of technology to help unlock an organization’s potential—for employees, customers, industries, and even society more broadly.

Today Microsoft Azure customers come in all shapes and sizes—from startups to space stations, hybrid to cloud native—and are increasingly capitalizing on the value of the full Microsoft Cloud to enable continuous innovation with integrated solutions.

The National Basketball Association (NBA) is a great example of an organization that chose to migrate its SAP solutions and other resources to Microsoft Azure to improve operations and boost fan engagement. Azure enabled them to spend less time managing technology and focus more on generating fan-centric experiences that bring together business, game, and fan data to enhance the way people can enjoy interacting with the NBA.

Using Azure DevOps and Azure Kubernetes Service, Ernst & Young Global Limited (EY) built more agile practices and shifted into a rolling product-delivery approach for software and services. Now, they’re developing and deploying solutions faster and with more confidence across a wide range of environments.

And global pharmaceutical company Sanofi overcame the limitations of its on-premises infrastructure by adopting a hybrid cloud strategy. They chose Azure as their cloud platform, gaining the speed, agility, and reliability necessary for innovation.

No matter where our customers are in their journey, whether they are migrating, modernizing, or creating new applications in the cloud for their customers, we are here to help them achieve their goals today and empower every organization to build for the future.

Learn more


Gartner, Magic Quadrant for Cloud Infrastructure and Platform Services, 19 October 2022, Raj Bala, Dennis Smith, Kevin Ji, David Wright, and Miguel Angel Borrega.


This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft. GARTNER and Magic Quadrant are registered trademarks and service marks of Gartner, Inc. and its affiliates in the United States and internationally and are used herein with permission. All rights reserved. Gartner does not endorse any vendor, product, or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

1 Based on project eligibility through the Azure Migration and Modernization Program.

Kathleen Mitford


Accenture and Microsoft drive digital transformation with OnePlatform on Microsoft Energy Data Services for OSDU™
Announcements
Partner
Mon, 31 Oct 2022

This post was co-authored by Sacha Abinader, Managing Director, Accenture and Keith Armstron, Senior Manager, Accenture Microsoft Business Group.

Accelerate decision-making and interoperability with the OSDU Data Platform

The OSDU™ Forum is a cross-industry collaboration to develop a common, standards-based, open data platform for the exploration and production (E&P) community, with the goal of liberating data and enabling greater access to, and insight from, it. The OSDU Data Platform’s promise is compelling: through established industry standards and openness to the larger technology ecosystem, it offers value beyond what can be achieved with in-house solutions. Accenture has partnered with Microsoft and collaborated closely with Schlumberger during the Microsoft Energy Data Services preview and development process. In addition to bringing domain expertise and digital integration acumen, the preview experience has allowed Accenture to develop skills and scale a team specific to this offering, enabling an operator’s OSDU Data Platform journey at pace.

As such, the OSDU Data Platform as an industry solution has been a top priority for Accenture with significant investments in skills, people, assets, and our presence and leadership as part of the OSDU Forum.

“We are thrilled to be a services partner for Microsoft Energy Data Services. The partnership with Microsoft and Schlumberger in enabling an open platform has been wonderful. As part of the preview, Accenture has been driving for interoperability across the technology ecosystem to bridge siloed teams and has partnered with Schlumberger and several ISVs to make this a reality. We are excited to enable operators to create additional value through improved and accelerated decision making and the development of new workflows and analytics.”—Emma Wild, Managing Director, Global OSDU Lead.

Accenture has been actively involved in the OSDU Data Platform initiative for several years. In addition to our commitment to the OSDU Forum, we have developed our own vision and strategy as to how we can support OSDU Data Platform integration into E&P workflows and how we can increase operators’ business capability and data value.

Accenture understands our clients’ challenges and is their partner for complex transformations

Our partners and clients want access to secure, clean, and curated data. To achieve this, they must liberate and migrate their data to the OSDU Data Platform. Our clients are dealing with large and highly complex data sets that have varying quality and formats. They also need to manage their ongoing business using the current and future capabilities of the OSDU Data Platform with its continual improvements of new data types and features. Accenture has planned a strategy that supports the transition from monolithic apps and data to the OSDU Data Platform at pace. Accenture and Microsoft are partners on this transformational journey as seen in Figure 1.

Figure 1: Accenture’s OSDU capabilities and differentiators: data migration, business continuity, data to insights, and scale to business value at pace.

Our vision, your value

To be successful we believe there is a need to create and support a solution that provides an end-to-end business capability, focusing on business value and time-to-value acceleration.

Figure 2: Accelerating business value through data: finding, validating, and preparing data; improving data quality; accessing and visualizing data; providing a one-stop data shop; applying emerging technologies and analytics; and enabling enhanced business capabilities.

Our approach will quickly prepare and present data to the user via the OSDU Data Platform irrespective of its current functionality and capability. By integrating and consolidating data in a standard format and enabling the interoperability of the platform across ISVs, operators can unlock the milestones to the right of the diagram and deliver the accelerated value they’ve been promised.

We think of it as supporting a data life cycle and journey to mitigate perceived risks due to the evolving nature of the OSDU Data Platform while continually improving your business workflows.

Why choose Microsoft Energy Data Services?

We recognize the complexity and risks involved in the transition and migration to the OSDU Data Platform. While energy companies have always managed E&P risk and uncertainty, there is generally a much lower appetite when it comes to IT and digital platforms. As a result, the industry is increasingly seeking packaged solutions or out-of-the-box delivery structures. This enables them to realize the visions promised by the OSDU Data Platform yet still focus on the “day job” and running their operations and business. These solutions and structures help de-risk the journey and minimize disruption to business continuity. Recognizing this, Microsoft developed an open-packaged solution to offer the OSDU Data Platform as a PaaS through Microsoft Energy Data Services. 

Microsoft Energy Data Services was designed to support the energy industry’s ambition to accelerate innovation, develop enhanced insights to drive operational efficiency, and inform new ways of working and workflows. Microsoft Energy Data Services can accelerate the journey to a cloud-based OSDU Data Platform and thus, the path to value.

Accenture and Microsoft Energy Data Services collaboration

Accenture has helped deploy and test Microsoft Energy Data Services through the preview stages to provide feedback to Microsoft Engineering. Accenture is focused on connecting data to business value and working with Microsoft to deliver a fully integrated approach using the OSDU Data Platform to accelerate digital transformation. Accenture demonstrated this during the preview by deploying the Microsoft Energy Data Services solution, ingesting data with OSDU core service tools and Accenture proprietary tools, and stitching together a data workflow across multiple ISVs to validate the openness of the platform. During this process, Accenture has built a team that can help deploy and scale on Microsoft Energy Data Services.

Microsoft Energy Data Services differentiates itself by enabling:

  • Integration with virtually any energy dataset, application, or cloud service through built-in tools.
  • Management of compute-intensive workloads at global scale.
  • Compliance with the OSDU Technical Standard for open source innovation.
  • Easy deployment of the OSDU Data Platform, with ongoing platform and management support aligned to OSDU Data Platform deployments.
  • Rapid data ingestion for analytics and decision-making.
  • Increased operational efficiency and global scalability with reduced operational costs.
  • Comprehensive security and compliance.
  • Easy use of native Azure and Microsoft solutions.

Microsoft Energy Data Services further builds on and enables the OSDU Data Platform value drivers:

  • The ability to access clean and curated historical data under a single data platform.
  • Open access to innovation and a wider set of technology partners (ISVs).
  • The removal of silos and barriers between disciplines, laying the foundation for digital transformation.

Accenture’s specific capabilities and toolkit

Data on its own is not the answer, and Accenture has been working hard to offer end-to-end services and tools that connect the full enterprise and business. The journey requires delivering clean data to unlock value through data science, deploying and rolling out these solutions across global operations, and, importantly, instilling trust in end users and the business so the value can be recognized.

Figure 3: Accenture’s OnePlatform and associated offerings that integrate with Microsoft Energy Data Services.

Accenture is spearheading the industry adoption of the OSDU Data Platform to enable energy companies to accelerate their digital transformation. One such platform we are developing is Accenture OnePlatform, as seen in Figures 3 and 4, a working solution that addresses current issues and challenges and helps extract the maximum value from data.

Figure 4: Accenture’s OnePlatform data workflow: identify and prepare data, ingest and condition data, visualize data and interoperability, interpret data with AI, and analyze data for insights.

Accenture OnePlatform is a cloud-agnostic platform and one-stop solution for data extraction, schema mapping, metadata generation, and data ingestion that is operationally efficient. Accenture OnePlatform makes OSDU Data Platform services available with just one click, without any need for extra plugins or open-source installations.

Some of the key highlights of Accenture OnePlatform are outlined below:

  • OSDU Data Platform orchestration: End-to-end delivery of business workflows via a single interface.
  • Data extraction: Extraction of different data types, such as LAS, SEG-Y, or RESQML, using data type converters.
  • Schema mapping: Mapping of client data to Accenture OnePlatform-compliant data types using AI/ML models.
  • Metadata generation: Generation of metadata using an AI rule-based approach.
  • Data ingestion: A one-click ingestion workflow that runs on Python utilities.
  • Data validation: Validation of records using Python utilities, with support for customized rules.
  • Data quality: An intelligent way to set up rules and run quality checks automatically.
  • Knowledge graph: Construction of an Accenture OnePlatform-based ontology that returns semantic results to the customer.

In addition, Accenture OnePlatform can serve as an orchestration tool across multiple SaaS ISV solutions. We know interoperability is a key value driver for choosing OSDU. Accenture has played a major role in ISV integration, collaborating with various ISVs and Microsoft toward the collective goal of consuming data from a single data platform. Accenture is working with several leading ISVs to develop their applications to fetch data according to the schemas from the OSDU Data Platform and Microsoft Energy Data Services, offering best-in-class interoperability and the ability to deliver end-to-end business workflows. With Accenture’s support, Microsoft Energy Data Services has demonstrated the integration of DELFI with multiple ISV applications, such as Interica and Ikon Science, and we were pleased to demonstrate this at the Schlumberger Digital Forum 2022.
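To make the interoperability point concrete, here is a hypothetical sketch of querying the standard OSDU search API on an OSDU-compliant platform such as Microsoft Energy Data Services. The instance URL, data partition ID, token scope, and schema kind are all placeholders that vary by deployment; consult your platform’s documentation for the real values.

```python
# Hypothetical sketch: query the OSDU search API on an OSDU-compliant
# platform. The instance URL, data partition ID, token scope, and schema
# kind below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

INSTANCE = "https://<your-instance>.energy.azure.com"  # placeholder
PARTITION = "<data-partition-id>"                      # placeholder

# The token audience/scope depends on the app registration backing the
# platform; "<token-scope>" is a placeholder.
token = DefaultAzureCredential().get_token("<token-scope>/.default").token

response = requests.post(
    f"{INSTANCE}/api/search/v2/query",
    headers={
        "Authorization": f"Bearer {token}",
        "data-partition-id": PARTITION,
        "Content-Type": "application/json",
    },
    json={
        "kind": "osdu:wks:master-data--Well:1.0.0",  # an OSDU well-known schema
        "query": "*",
        "limit": 10,
    },
    timeout=30,
)
response.raise_for_status()
for record in response.json().get("results", []):
    print(record.get("id"))
```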

Conclusion

In closing, Accenture is committed to being a leading partner to help operators navigate the uncertainties around OSDU Data Platform implementation, manage the risks of deployment, and realize the full value of their data.

We believe Accenture is best placed to deliver on these commitments and unlock your data’s value, based on our deep industry expertise, investments in accelerators like Accenture OnePlatform, more than 14,000 dedicated, skilled oil and gas practitioners globally, including 250+ OSDU™-trained professionals, and our extensive ecosystem relationships. We are confident that our capabilities and our partnership with Microsoft are key to helping operators execute and scale their OSDU Data Platform transformation with Microsoft Energy Data Services and the interoperability of the platform.

How to work with Accenture on Microsoft Energy Data Services

Microsoft Energy Data Services is an enterprise-grade, fully managed OSDU Data Platform for the energy industry that is efficient, standardized, easy to deploy, and scalable for data management—for ingesting, aggregating, storing, searching, and retrieving data. The platform will provide the scale, security, privacy, and compliance expected by our enterprise customers.

Learn more

Kadri Umay


Forrester Total Economic Impact study: Azure Arc delivers 206 percent ROI over 3 years
Security
Updates
Thu, 27 Oct 2022

Businesses today are building and running cloud-based applications to drive their business forward. As these applications are built, they need to take full advantage of the agility, efficiency, and speed of cloud innovation. However, not all applications, and not all of the infrastructure they run on, can physically reside in the public cloud. That’s why 86 percent of enterprises plan to increase investment in hybrid or multicloud environments.

We’re building Azure to meet you where you are, so you can do more with your existing investments. We also want you to be able to stay agile and flexible when extending Azure to your on-premises, multicloud, and edge environments.

Azure Arc delivers on these needs. Azure Arc is a bridge that extends the Azure platform so you can build applications and services with the flexibility to run across datacenters, edge, and multicloud environments.


For the 2022 commissioned study, The Total Economic Impact™ of Microsoft Azure Arc for Security and Governance, Forrester Consulting interviewed four organizations with experience using Azure Arc. These organizations serve global markets in the industries of manufacturing, energy, and financial services. According to the aggregated data, Azure Arc demonstrated:

  • A 206 percent return on investment (ROI) over three years with payback in less than six months.
  • A 30 percent gain in productivity for IT Operations team members.
  • An 80 percent reduction in risk of data breach from unsecured infrastructure.
  • A 15 percent reduction in spending on third-party tools, saving on expenses.

The Forrester study provides a framework for organizations wanting to evaluate the potential financial impact on their organizations of using Azure Arc for infrastructure security and governance. Forrester found that organizations with hybrid or multicloud strategies can realize productivity gains and reduce security risks by using Microsoft Azure Arc to secure and govern non-Azure infrastructure alongside Azure resources.

Productivity gains with Azure Arc’s single-pane view

The organizations in Forrester’s study reported that after implementing Azure Arc, their IT operations personnel realized a 30 percent gain in productivity by saving time on regular duties such as configuring and updating infrastructure, managing policies and permissions, troubleshooting and resolving issues, and other tasks that don’t directly drive the business. With Azure Arc, IT teams can observe, secure, and govern diverse infrastructure and applications from a single pane of glass in Azure; leveraging Azure services makes them more agile, lets them respond more efficiently, and frees time for higher-value tasks that serve business interests.
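As an illustration of the single-pane idea, the sketch below, which assumes the azure-mgmt-resourcegraph package and a placeholder subscription ID, lists Azure Arc-enabled servers alongside native Azure VMs with one Azure Resource Graph query.

```python
# Illustrative sketch: one Azure Resource Graph query that returns Azure
# Arc-enabled servers (microsoft.hybridcompute/machines) alongside native
# Azure VMs. The subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

request = QueryRequest(
    subscriptions=["<subscription-id>"],
    query="""
        Resources
        | where type in~ ('microsoft.hybridcompute/machines',
                          'microsoft.compute/virtualmachines')
        | project name, type, location, resourceGroup
    """,
)

result = client.resources(request)
for row in result.data:  # rows come back as dicts with the projected columns
    print(row["name"], row["type"], row["location"])
```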

“We’re just making everyone’s lives so much easier so they can do other things. If there is an issue, for example, you don’t have to spend a week troubleshooting.”—Architect, Cloud products, Energy.

Cost savings and streamlined infrastructure through the Azure portal

Most organizations today run a mix of applications in on-premises datacenters, in the cloud, and at the edge. These disparate environments often result in investments in multiple management tools specific to the technology platforms, resulting in tool sprawl and excessive costs.

By moving to a single view of infrastructure and resources in the Azure portal enabled by Azure Arc, organizations could eliminate their legacy management tools, reducing licensing expenditures and eliminating costly on-premises management infrastructure. With Azure’s flexible consumption-based pricing, they are no longer locked into long-term contracts or capacity limits.

The composite organization in the Forrester study saved $900,000 in year three from reduced spending on third-party tools—a 15 percent decrease.

“When I do dive in, I actually have a faster understanding of [our infrastructure]. So the benefit to me is that I have greater visibility. I need to ask [the team] fewer questions. The [Azure Arc] dashboard is […] very easy.”—VP of IT, Finance.

Microsoft Defender for Cloud and Microsoft Sentinel modernize security operations

Azure Arc helps organizations combat rapidly evolving security threats with increased efficiency by enabling the use of Microsoft security services such as Microsoft Defender for Cloud and Microsoft Sentinel across hybrid and multicloud environments.

Forrester found that the composite organization lowered the risk of a data breach from unsecured infrastructure by 80 percent after adopting Azure Arc and Microsoft security services. After onboarding Azure Arc, the organization uncovered noncompliant assets running on-premises or in edge environments and updated them to the latest security standards. This results in savings of hundreds of thousands of dollars that would otherwise have been spent managing breaches.

“With Azure Arc, we gained real insights into our infrastructure, including infrastructure [another cloud provider]. That helped us identify architecture [gaps] as well as controls to improve security compliance. [With Azure Arc], we found that around 20 percent of our infrastructure had been noncompliant.”—Deputy IT Director, Manufacturing.

Learn more

Azure Arc is a bridge that extends the Azure platform to help customers build applications and services with the flexibility to run across datacenters, at the edge, and in multicloud environments. Get started today and do more with your existing investments. We welcome you to try it for free. You can also learn more about how other customers are using Azure Arc to innovate anywhere.

Omar Khan


Microsoft Cost Management updates—October 2022
Cloud Strategy
API Management
Get ready to amp up your savings! October is all about cost optimization: Azure savings plans, a more flexible way to save by pre-committing to hourly usage; the general availability of Azure Advisor score; Azure Migrate improvements; automation with Microsoft Syntex and Power Platform; and nine other new or updated offers to help you save. You’ll also learn how to group related Azure Virtual Desktop costs with two new videos and see two documentation updates.
Wed, 26 Oct 2022

Whether you’re a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you’re spending where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in.

We’re always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you’re accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback.

Let’s dig into the details.

Introducing Azure savings plans

As a cloud provider, we are committed to helping our customers get the most value out of their cloud investment through a comprehensive set of pricing models, offers, and benefits that adapt to customers’ unique needs. Today, we are announcing Azure savings plan. With this new pricing offer, customers have an easy and flexible way to save up to 65 percent on compute costs, compared to pay-as-you-go pricing, in addition to existing offers in market, including Azure Hybrid Benefit and reservations.

Azure savings plans lower prices on select Azure services with a commitment to spend a fixed hourly amount for one or three years. You choose whether to pay all upfront or monthly at no extra cost. As you use services such as virtual machines (VMs) and container instances across the world, their usage is covered by the plan at reduced prices, helping you get more value from your cloud budget. During the times when usage is above the hourly commitment, you’ll be billed at your regular on-demand rates.
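To illustrate the mechanics, here is a simplified toy model of that hourly billing behavior. The 30 percent discount and the dollar figures are invented for illustration; actual savings plan rates vary by service, region, and term.

```python
# Toy model of savings plan hourly billing. The 30 percent discount and the
# dollar figures are invented; actual rates vary by service, region, and term.
def hourly_bill(usage_at_plan_rates: float, commitment: float,
                plan_discount: float = 0.30) -> float:
    # Usage is valued at the discounted plan rates and drawn against the
    # hourly commitment; the commitment is billed whether or not it is used.
    overage = max(usage_at_plan_rates - commitment, 0.0)
    # Usage above the commitment is billed at regular on-demand rates,
    # i.e., without the plan discount.
    return commitment + overage / (1.0 - plan_discount)

for usage in (3.20, 5.00, 6.50):  # hourly usage valued at plan rates, in dollars
    print(f"plan-rate usage ${usage:.2f}/hr -> billed ${hourly_bill(usage, 5.00):.2f}/hr")
```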

Azure savings plan is available for the following services today:

  • Virtual machines
  • App Service
  • Azure Functions premium plan
  • Container instances
  • Dedicated hosts

To learn more, see Optimize and maximize cloud investment with Azure savings plan for compute.

Group costs by Azure Virtual Desktop host pool

Many organizations use Azure Virtual Desktop to virtualize applications, often as part of their cloud migration strategy. These applications can cover anything from pure virtual machines to SQL databases, web apps, and more. With such a broad set of connected services, you can imagine how difficult it might be to visualize and manage costs. To help streamline this process and deliver a holistic view of costs rolling up to your Azure Virtual Desktop host pools, Cost Management now supports tagging resource dependencies to group them under their logical parent within the cost analysis preview, making it easier than ever to see the cost of your Azure Virtual Desktop workloads.

To get started, simply apply the cm-resource-parent tag to the virtual machines and/or other child resources you want to see rolled up to your host pool. Set the tag value to be the full resource ID of the host pool. Once the tag is applied, all new usage data will start to be grouped under the parent resource.
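For automation at scale, the tag can also be applied programmatically. Here is a hypothetical sketch using the ARM tags API; both resource IDs are placeholders.

```python
# Hypothetical sketch: apply the cm-resource-parent tag to a session-host VM
# so its costs roll up under an Azure Virtual Desktop host pool. Both resource
# IDs are placeholders; this calls the ARM tags API (Microsoft.Resources/tags).
import requests
from azure.identity import DefaultAzureCredential

vm_id = ("/subscriptions/<sub>/resourceGroups/<rg>/providers/"
         "Microsoft.Compute/virtualMachines/<session-host>")
host_pool_id = ("/subscriptions/<sub>/resourceGroups/<rg>/providers/"
                "Microsoft.DesktopVirtualization/hostPools/<host-pool>")

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token

# PATCH with the "Merge" operation keeps any tags already on the resource.
resp = requests.patch(
    f"https://management.azure.com{vm_id}/providers/Microsoft.Resources/tags/default",
    params={"api-version": "2021-04-01"},
    headers={"Authorization": f"Bearer {token}"},
    json={
        "operation": "Merge",
        "properties": {"tags": {"cm-resource-parent": host_pool_id}},
    },
    timeout=30,
)
resp.raise_for_status()
```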

Figure: The cost analysis preview showing VMs and disks grouped under an Azure Virtual Desktop host pool.

For a guided walkthrough, check out the videos on the Microsoft Cost Management YouTube channel.

To learn more, see Group costs by host pool with Cost Management now in Public Preview for Azure Virtual Desktop. To learn more about the cm-resource-parent tag and how to group resources of any type, see Group related resources in the cost analysis preview.

Azure Advisor score now generally available

Azure Advisor score offers you a way to prioritize the most impactful Advisor recommendations to optimize your deployments using the Azure Well-Architected Framework. Advisor displays your category scores and your overall Advisor score as percentages. A score of 100 percent in any category means all your resources, assessed by Advisor, follow the best practices that Advisor recommends. On the other end of the spectrum, a score of 0 percent means that none of your resources, assessed by Advisor, follow Advisor recommendations.

Advisor score now supports the ability to report on specific workloads using resource tag filters in addition to subscriptions. For example, you can now omit non-production resources from the score calculation. You can also track your progress over time to understand whether you are consistently maintaining healthy Azure deployments.
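If you prefer to work with the underlying recommendations programmatically, here is a small sketch using the azure-mgmt-advisor package. Note that it lists recommendations, not the Advisor score itself, and the subscription ID is a placeholder.

```python
# Sketch: list Advisor recommendations with the Azure SDK for Python so the
# highest-impact items can be triaged. This lists recommendations, not the
# Advisor score itself; the subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.advisor import AdvisorManagementClient

client = AdvisorManagementClient(DefaultAzureCredential(), "<subscription-id>")

for rec in client.recommendations.list():
    # Each recommendation carries a category (Cost, Security, Reliability, and
    # so on), an impact level (High/Medium/Low), and a short description.
    print(rec.category, rec.impact, rec.short_description.problem)
```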

To learn more, see Optimize Azure workloads by using Advisor Score.

Help shape the future of cost management for cloud services

Are you responsible for managing purchases, cost, and commerce for your cloud services and SaaS (software as a service) products? Do you perform tasks such as acquisition, account management, cost management, billing, and cost optimization for those services? Do your job responsibilities cover scenarios such as understanding cloud solution spending, discovering resources/services needed, acquiring licenses/subscriptions, monitoring spending over time, analyzing resource utilization, updating licenses/subscriptions, and paying invoices?

If so, we are interested in having an hour-long conversation with you. Please send an email to CE_UXR@microsoft.com to highlight your interest and we will get back to you.

Cost optimization using Azure Migrate

During Microsoft Ignite, we highlighted our continued commitment to cost optimization through support for SQL Server assessments prior to migration and modernization using Azure Migrate. Customers can now perform unified, at-scale, agentless discovery and assessment of SQL Servers on Microsoft Hyper-V, bare-metal servers, and infrastructure as a service (IaaS) offerings of other public clouds, such as AWS EC2, in addition to VMware environments. The capability allows customers to analyze existing configurations, performance, and feature compatibility to help with right-sizing and cost estimation. It also checks readiness and blockers for migrating to Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, and Azure SQL Database. All of this information can be presented in a single coherent report for easy consumption while reducing cost for customers.

Please see our tech community blog for more details. The blog presents a step-by-step procedure to get started, followed by details on scaling and support. Post-assessment options and more details on related topics are covered as well.

Drive efficiency through automation and AI

This year at Microsoft Ignite, we explored how organizations can activate AI and automation directly in their business workflows and empower developers to use those same intelligent building blocks to deliver their own differentiated experiences.

The global pandemic has created unprecedented levels of uncertainty, as well as the need to sense and reshape our physical and digital environments, sometimes in completely new ways. Leaders across industries recognize innovation as the only path forward. Critically, we’ve seen a shift from “innovation for innovation’s sake” toward a desire to lower operating costs, anticipate trends, reduce carbon footprints, and improve customer and employee experiences. We’re calling this commitment to innovation “digital perseverance.”

Read the full blog post to learn about automation opportunities through Microsoft Syntex and Power Platform.

What’s new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what’s coming in Microsoft Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. A number of upcoming features are available to preview in Cost Management Labs today.

Of course, that’s not all. Every change in Microsoft Cost Management is available in Cost Management Labs a week before it’s in the full Azure portal or Microsoft 365 admin center. We’re eager to hear your thoughts and understand what you’d like to see next. What are you waiting for? Try Cost Management Labs today.

New ways to save money in the Microsoft Cloud

New and updated general availability offers:

New previews:

New videos and learning opportunities

If you manage related resources and are looking for a simpler way to view costs across resources, you’ll want to check out these new videos:

Follow the Microsoft Cost Management YouTube channel to stay in the loop with new videos as they’re released and let us know what you’d like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Microsoft Cost Management.

Documentation updates

Here are two documentation updates you might be interested in if you use reservations or are interested in more flexible ways to save money in Azure:

Want to keep an eye on all documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request. You can also submit a GitHub issue. We welcome and appreciate all contributions!

Join the Microsoft Cost Management team

Are you excited about helping customers and partners better manage and optimize costs? We’re looking for passionate, dedicated, and exceptional people to help build best-in-class cloud platforms and experiences to enable exactly that. If you have experience with big data infrastructure, reliable and scalable APIs, or rich and engaging user experiences, you’ll find no better challenge than serving every Microsoft customer and partner in one of the most critical areas for driving cloud success.

Watch the video below to learn more about the Microsoft Cost Management team:

Join the Commerce Platform and Experiences (CPX) Microsoft Team.

Join our team.

What’s next?

These are just a few of the big updates from last month. Don’t forget to check out the previous Microsoft Cost Management updates. We’re always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @MSCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum.

We know these are trying times for everyone. Best wishes from the Microsoft Cost Management team. Stay safe and stay healthy.

https://azure.microsoft.com/blog/microsoft-cost-management-updates-october-2022/#comments https://azure.microsoft.com/blog/microsoft-cost-management-updates-october-2022/
Michael Flanakin


introducing-vision-studio-a-uibased-demo-interface-for-computer-vision
Updates
Artificial Intelligence
Introducing Vision Studio, a UI-based demo interface for Computer Vision
Are you looking to improve the analysis and management of images and videos? The Computer Vision API provides access to advanced algorithms for processing media and returning information. Wed, 26 Oct 2022 08:00:08 Z

Are you looking to improve the analysis and management of images and videos? The Computer Vision API provides access to advanced algorithms for processing media and returning information. By uploading a media asset or specifying a media asset’s URL, Azure’s Computer Vision algorithms can analyze visual content in different ways based on inputs and user choices, tailored to your business.

Want to try out this service with samples that return data in a quick, straightforward manner, without technical support? We are happy to introduce Vision Studio in preview, a platform of UI-based tools that lets you explore, demo, and evaluate features from Computer Vision, regardless of your coding experience. You can start experimenting with the services and learning what they offer; then, when you’re ready to deploy, use the available client libraries and REST APIs to start embedding these services into your own applications.

Overview of Vision Studio

Image displaying the Vision Studio interface with multiple services available to try. The interface includes multiple tabs, which indicate additional product demos to experience.

Each of the Computer Vision features has one or more try-it-out experiences in Vision Studio. To use your own images in Vision Studio, you’ll need an Azure subscription and a Cognitive Services resource for authentication. Otherwise, you can try Vision Studio without logging in, using our provided set of sample images. These experiences help you quickly test the features using a no-code approach that provides JSON and text responses. In Vision Studio, you can try out services including optical character recognition (OCR), Spatial Analysis, Face, and Image Analysis, each covered in the sections below.

What’s new to try in Vision Studio

Optical Character Recognition (OCR)

The optical character recognition (OCR) service allows you to extract printed or handwritten text from images, such as photos of street signs and products, as well as from documents—invoices, bills, financial reports, articles, and more. Try it out in Vision Studio using your own images to extract text.
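
To illustrate what the same call looks like in code, here is a minimal sketch using the Computer Vision Python SDK (azure-cognitiveservices-vision-computervision); the endpoint, key, and image URL are placeholders, not values from this post.

```python
# Minimal sketch: extract text with the Read (OCR) API via the
# Computer Vision Python SDK. Endpoint, key, and image URL are placeholders.
import time

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-key>"
client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

# Submit the image; Read runs asynchronously and returns an operation URL.
response = client.read("https://example.com/street-sign.jpg", raw=True)
operation_id = response.headers["Operation-Location"].split("/")[-1]

# Poll until the operation finishes.
while True:
    result = client.get_read_result(operation_id)
    if result.status not in (OperationStatusCodes.running, OperationStatusCodes.not_started):
        break
    time.sleep(1)

# Print each recognized line of text.
if result.status == OperationStatusCodes.succeeded:
    for page in result.analyze_result.read_results:
        for line in page.lines:
            print(line.text)
```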

Spatial Analysis

The Spatial Analysis service analyzes the presence and movement of people in a video feed and produces events that other systems can respond to. Try it out in Vision Studio using the samples we provide to see how Spatial Analysis can improve retail operations.

Face

The Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy. Apply for access to the Face API service to try out identity recognition and verification in Vision Studio.
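
Once access is granted, a basic detection call looks roughly like the sketch below, which uses the Face Python SDK (azure-cognitiveservices-vision-face); the endpoint, key, and image URL are placeholders, and recognition features beyond detection require Limited Access approval.

```python
# Hedged sketch: detect faces with the Face Python SDK. Endpoint, key, and
# image URL are placeholders; face IDs for recognition need approved access.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-key>"
face_client = FaceClient(endpoint, CognitiveServicesCredentials(key))

# Detect faces by URL; return_face_id=False keeps this within the
# detection-only scope that does not require Limited Access approval.
faces = face_client.face.detect_with_url(
    url="https://example.com/people.jpg",
    return_face_id=False,
)

for face in faces:
    rect = face.face_rectangle
    print(f"Face at ({rect.left}, {rect.top}), size {rect.width}x{rect.height}")
```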

Image Analysis

The Image Analysis service extracts many visual features from images, such as objects, faces, adult content, and auto-generated text descriptions to improve accessibility. Try it out in Vision Studio using your own images to accurately identify objects, moderate content, and caption images.

Image displaying the Object Detection service demo. The image shows three people sitting at a table, and on the right, a list of the things detected in the image including people and a table.
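
Outside the Studio, a similar analysis can be run from the client library. Below is a small sketch, assuming the Computer Vision Python SDK with placeholder endpoint, key, and image URL, that requests objects, captions, and adult-content flags in one call:

```python
# Sketch: analyze an image for objects, captions, and adult content with the
# Computer Vision Python SDK. Endpoint, key, and image URL are placeholders.
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-key>"
client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

analysis = client.analyze_image(
    "https://example.com/meeting-room.jpg",
    visual_features=[
        VisualFeatureTypes.objects,
        VisualFeatureTypes.description,
        VisualFeatureTypes.adult,
    ],
)

# Auto-generated caption, useful as alt text for accessibility.
for caption in analysis.description.captions:
    print(f"Caption: {caption.text} (confidence {caption.confidence:.0%})")

# Detected objects with bounding boxes.
for obj in analysis.objects:
    r = obj.rectangle
    print(f"{obj.object_property} at ({r.x}, {r.y}), {r.w}x{r.h}")

# Content moderation flag.
print("Adult content:", analysis.adult.is_adult_content)
```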

Responsible AI in Vision

We offer guidance for the responsible use of these capabilities based on Microsoft AI’s principles of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The Responsible AI Standard sets out our best thinking on how we will build AI systems to uphold these values and earn society’s trust. It provides specific, actionable guidance for our teams that goes beyond the high-level principles that have dominated the AI landscape to date. Learn more about Responsible AI in Vision.

Next steps

https://azure.microsoft.com/blog/introducing-vision-studio-a-uibased-demo-interface-for-computer-vision/#comments https://azure.microsoft.com/blog/introducing-vision-studio-a-uibased-demo-interface-for-computer-vision/
Kate Browne


image-analysis-40-with-new-api-endpoint-and-ocr-model-in-preview
Artificial Intelligence
API Management
Image Analysis 4.0 with new API endpoint and OCR model in preview
We are thrilled to announce the preview release of Computer Vision Image Analysis 4.0 which combines existing and new visual features such as read optical character recognition (OCR), captioning, image classification and tagging, object detection, people detection, and smart cropping into one API. Tue, 25 Oct 2022 15:10:01 Z

Enterprises and hobbyists alike have been using Azure Computer Vision’s Image Analysis API to garner various insights from their images. These insights help power scenarios such as digital asset management, search engine optimization (SEO), image content moderation, and alt text for accessibility, among others.

Newly improved features including read (OCR)

We are thrilled to announce the preview release of Computer Vision Image Analysis 4.0 which combines existing and new visual features such as read optical character recognition (OCR), captioning, image classification and tagging, object detection, people detection, and smart cropping into one API. One call is all it takes to run all these features on an image. 
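
As an illustrative sketch of that single call, the REST request below uses the preview endpoint; the api-version string, feature names, and response fields are taken from the preview documentation and may change before general availability, and the endpoint, key, and image URL are placeholders.

```python
# Hedged sketch: one Image Analysis 4.0 preview call returning multiple
# features at once. The api-version and field names follow the preview docs
# and may change; endpoint, key, and image URL are placeholders.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

response = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={
        "api-version": "2022-10-12-preview",
        "features": "caption,read,tags,objects,people,smartCrops",
    },
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "https://example.com/street-sign.jpg"},
)
response.raise_for_status()
result = response.json()

# Response field names below are assumptions based on the preview docs.
print("Caption:", result.get("captionResult", {}).get("text"))
print("OCR text:", result.get("readResult", {}).get("content"))
```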

The OCR feature integrates more deeply with the Computer Vision service and includes performance improvements optimized for image scenarios, making OCR easy to use in user interfaces and near real-time experiences. Read now supports 164 languages, including Cyrillic, Arabic, and Hindi.

On the left is a picture of a road sign. On the right is an image displaying the plain text from the road sign, extracted using optical character recognition (OCR) technology.

Tested at scale and ready for deployment 

Microsoft’s own products, including PowerPoint, Designer, Word, Outlook, Edge, and LinkedIn, use Vision APIs to power design suggestions, alt text for accessibility, SEO, document processing, and content moderation.

You can get started with the preview by trying out the visual features with your images on Vision Studio. Upgrading from a previous version of the Computer Vision Image Analysis API to V4.0 is simple with these instructions.

We will continue to release breakthrough vision AI through this new API over the coming months, including capabilities powered by the Florence foundation model featured in the keynote at CVPR, this year’s premier computer vision conference.

Picture of a cat. The cat is highlighted with a box to demonstrate object detection technology, and a small box next to the cat displays “cat” with a confidence score of 91.10%.

Additional Computer Vision services

Spatial Analysis is also in preview. You can use the spatial analysis feature to create apps that can count people in a room, understand dwell times in front of a retail display, and determine wait times in lines. Build solutions that enable occupancy management and social distancing, optimize in-store and office layouts, and accelerate the checkout process. By processing video streams from physical spaces, you’re able to learn how people use them and maximize the space’s value to your organization.

The Azure Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy. Face service access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. Face service is only available to Microsoft managed customers and partners. Use the Face Recognition intake form to apply for access. For more information, see the Face limited access page.

Computer Vision and Responsible AI

We are excited to see how our customers use Computer Vision’s Image Analysis API with these new and updated features. Our technology advancements are also guided by Microsoft’s Responsible AI process, and our principles of fairness, inclusiveness, reliability and safety, transparency, privacy and security, and accountability. We put these ethical standards into practice through the Office of Responsible AI (ORA)—which sets our rules and governance processes, the AI Ethics and Effects in Engineering and Research (Aether) Committee—which advises our leadership on the challenges and opportunities presented by AI innovations, and Responsible AI Strategy in Engineering (RAISE)—a team that enables the implementation of Microsoft Responsible AI rules across engineering groups.

Get started

Start improving how you analyze images with Image Analysis 4.0, featuring a unified API endpoint and a new OCR model.

https://azure.microsoft.com/blog/image-analysis-40-with-new-api-endpoint-and-ocr-model-in-preview/#comments https://azure.microsoft.com/blog/image-analysis-40-with-new-api-endpoint-and-ocr-model-in-preview/
Andy Beatman


azure-scales-530b-parameter-gpt3-model-with-nvidia-nemo-megatron
Updates
Artificial Intelligence
Partner
Azure scales 530B parameter GPT-3 model with NVIDIA NeMo Megatron
Combining NVIDIA NeMo Megatron with our Azure AI infrastructure offers a powerful platform that anyone can spin up in minutes without having to incur the costs and burden of managing their own on-premises infrastructure. And of course, we have taken our benchmarking of the new framework to a new level, to truly show the power of the Azure infrastructure. Mon, 24 Oct 2022 09:00:37 Z

This post was co-authored by Hugo Affaticati, Technical Program Manager, Microsoft Azure HPC + AI, and Jon Shelley, Principal TPM Manager, Microsoft Azure HPC + AI.

Natural language processing (NLP), automated speech recognition (ASR), and text-to-speech (TTS) applications are becoming increasingly common in today’s world. Many companies have leveraged these technologies to create chatbots that manage customer questions and complaints, streamline operations, and remove some of the heavy cost burden that comes with headcount. But what you may not realize is that they’re also being used internally to reduce risk and identify fraudulent behavior, reduce customer complaints, increase automation, and analyze customer sentiment. These technologies are prevalent across industries, especially healthcare, finance, retail, and telecommunications.

NVIDIA recently released the latest version of the NVIDIA NeMo Megatron framework, which is now in open beta. This framework can be used to build and deploy large language models (LLMs) with natural language understanding (NLU).

Combining NVIDIA NeMo Megatron with our Azure AI infrastructure offers a powerful platform that anyone can spin up in minutes without having to incur the costs and burden of managing their own on-premises infrastructure. And of course, we have taken our benchmarking of the new framework to a new level, to truly show the power of the Azure infrastructure.

Reaching new milestones with 530B parameters

We used Azure NDm A100 v4-series virtual machines to run the GPT-3 model with the new NVIDIA NeMo Megatron framework and test the limits of this series. NDm A100 v4 virtual machines are Azure’s flagship GPU offerings for AI and deep learning, powered by NVIDIA A100 80GB Tensor Core GPUs. These instances have the most GPU memory capacity and bandwidth of our offerings, backed by NVIDIA InfiniBand HDR connections to support scaling up and out. Ultimately, we ran a 530B-parameter benchmark on 175 virtual machines, achieving a training time per step as low as 55.7 seconds (Figure 1). The benchmark gauges compute efficiency and scaling by measuring the time taken per step to train the model after steady state is reached, with a mini-batch size of one. Such speed would not have been possible without InfiniBand HDR, which provides fast inter-node communication without added latency.

The graph shows Azure’s performance results on the GPT-3 530 billion-parameter model with NVIDIA NeMo Megatron. The Training time per step decreases almost linearly from 88.2 seconds to 55.8 seconds when the number of nodes increases from 105 to 175.
Figure 1: Training time per step on the 530B-parameter benchmark from 105 to 175 virtual machines.

These results highlight an almost linear speed increase: performance keeps improving as nodes are added, which is paramount for heavy or time-sensitive workloads. As shown by these runs with billions of parameters, customers can rest assured that Azure’s infrastructure can handle even the most difficult and complex workloads, on demand.
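
To make “almost linear” concrete, the quick calculation below uses the per-step times quoted for Figure 1 to estimate the scaling efficiency (a back-of-the-envelope sketch, not an official benchmark metric):

```python
# Sanity check of how close the observed speedup is to linear scaling;
# the node counts and per-step times come from Figure 1 above.
nodes_small, step_small = 105, 88.2   # virtual machines, seconds per step
nodes_large, step_large = 175, 55.8

observed_speedup = step_small / step_large   # ~1.58x
ideal_speedup = nodes_large / nodes_small    # ~1.67x
scaling_efficiency = observed_speedup / ideal_speedup

print(f"Observed speedup:   {observed_speedup:.2f}x")
print(f"Ideal speedup:      {ideal_speedup:.2f}x")
print(f"Scaling efficiency: {scaling_efficiency:.0%}")  # ~95%
```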

“Speed and scale are both key to developing large language models, and the latest release of the NVIDIA NeMo Megatron framework introduces new techniques to deliver 30 percent faster training for LLMs,” said Paresh Kharya, senior director of accelerated computing at NVIDIA. “Microsoft’s testing with NeMo Megatron 530B also shows that Azure NDm A100 v4 instances powered by NVIDIA A100 Tensor Core GPUs and NVIDIA InfiniBand networking provide a compelling option for achieving linear training speedups at massive scale.”

Showcasing Azure AI capabilities—now and in the future

Azure’s commitment is to make AI and HPC accessible to everyone. It includes, but is not limited to, providing the best AI infrastructure that scales from the smallest use cases to the heaviest workloads. As we continue to innovate to build the best platform for your AI workloads, our promise to you is to use the latest benchmarks to test our AI capabilities. These results help drive our own innovation and showcase that there is no limit to what you can do. For all your AI computing needs, Azure has you covered.

Learn more

To learn more about the results or how to recreate them, please see the following links.

https://azure.microsoft.com/blog/azure-scales-530b-parameter-gpt3-model-with-nvidia-nemo-megatron/#comments https://azure.microsoft.com/blog/azure-scales-530b-parameter-gpt3-model-with-nvidia-nemo-megatron/
Rachel Pruitt