
Unlocking the trillion-dollar potential of generative AI

Generative AI is poised to unlock trillions in annual economic value across industries. This rapidly evolving field is changing the way we approach everything from content creation to software development, promising never-before-seen efficiency and productivity gains.

In this session, experts from Amazon Web Services (AWS) and QuantumBlack, AI by McKinsey, discuss the drivers fueling the massive potential impact of generative AI. Plus, they look at key industries set to capture the largest share of this value and practical strategies for effectively upskilling their workforces to take advantage of these productivity gains. 

Watch this session to:

  • Explore generative AI’s economic impact
  • Understand workforce upskilling needs
  • Integrate generative AI responsibly
  • Establish an AI-ready business model

Learn how to seamlessly integrate generative AI into your organization’s workflows while fostering a skilled and adaptable workforce. Register now to learn how to unlock the trillion-dollar potential of generative AI.

Register here for free.

Optimizing the supply chain with a data lakehouse

When a commercial ship travels from the port of Ras Tanura in Saudi Arabia to Tokyo Bay, it’s not only carrying cargo; it’s also transporting millions of data points across a wide array of partners and complex technology systems.

Consider, for example, Maersk. The global shipping container and logistics company has more than 100,000 employees and offices in 120 countries, and it operates about 800 container ships that can each hold 18,000 tractor-trailer containers. From manufacture to delivery, the items within these containers carry hundreds or thousands of data points, highlighting the amount of supply chain data organizations manage on a daily basis.

Until recently, access to the bulk of an organization's supply chain data was limited to specialists, with the data distributed across myriad systems. Constrained by traditional data warehouse limitations, maintaining that data requires considerable engineering effort, heavy oversight, and substantial financial commitment. Today, a huge amount of data—generated by an increasingly digital supply chain—languishes in data lakes without ever being made available to the business.

A 2023 Boston Consulting Group survey notes that 56% of managers say that although investment in modernizing data architectures continues, managing data operating costs remains a major pain point. The consultancy also expects data deluge issues to worsen as the volume of data generated grows at a rate of 21% from 2021 to 2024, reaching 149 zettabytes globally.

“Data is everywhere,” says Mark Sear, director of AI, data, and integration at Maersk. “Just consider the life of a product and what goes into transporting a computer mouse from China to the United Kingdom. You have to work out how you get it from the factory to the port, the port to the next port, the port to the warehouse, and the warehouse to the consumer. There are vast amounts of data points throughout that journey.”

Sear says organizations that manage to integrate these rich sets of data are poised to reap valuable business benefits. “Every single data point is an opportunity for improvement—to improve profitability, knowledge, our ability to price correctly, our ability to staff correctly, and to satisfy the customer,” he says.

Organizations like Maersk are increasingly turning to a data lakehouse architecture. By combining the cost-effective scale of a data lake with the capability and performance of a data warehouse, a data lakehouse promises to help companies unify disparate supply chain data and give a larger group of users access to structured, semi-structured, and unstructured data. Building analytics on top of the lakehouse allows this new architectural approach not only to advance supply chain efficiency with better performance and governance, but also to support easy, immediate data analysis and help reduce operational costs.
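
To make the pattern concrete, here is a minimal sketch of the lakehouse idea using PySpark: raw, semi-structured shipment events land in low-cost object storage, are curated into an open columnar table, and are then queried directly with SQL by a broader set of users. The storage paths, field names, and table are illustrative assumptions, not any particular company's actual setup.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("supply-chain-lakehouse").getOrCreate()

# Raw, semi-structured shipment events landing in the data lake (path is hypothetical).
events = spark.read.json("s3://example-lake/raw/shipment_events/")

# Curate the events into an open columnar table that analysts can query like a warehouse.
events.write.mode("overwrite").parquet("s3://example-lake/curated/shipment_events/")

# Expose the curated files as a SQL view over the same storage.
spark.read.parquet("s3://example-lake/curated/shipment_events/") \
    .createOrReplaceTempView("shipment_events")

# Business users run warehouse-style SQL directly on the lake (columns are illustrative).
delayed = spark.sql("""
    SELECT port_of_origin, COUNT(*) AS delayed_shipments
    FROM shipment_events
    WHERE status = 'DELAYED'
    GROUP BY port_of_origin
""")
delayed.show()
```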

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

State Actor Made Three Attempts to Breach B.C. Government Networks

British Columbia Cyberattack

A state or state-sponsored actor orchestrated the "sophisticated" cyberattacks against British Columbia government networks, the head of B.C.’s public service revealed on Friday. Shannon Salter, deputy minister to the premier, told the press that the threat actor made three separate attempts over the past month to breach government systems, and that the government was aware of the breach at the time but did not make it public until May 8. Premier David Eby first announced on Wednesday that multiple cybersecurity incidents had been observed on government networks, adding that the Canadian Centre for Cyber Security (CCCS) and other agencies were involved in the investigation. In her Friday technical briefing, Salter declined to confirm whether the hack was related to last month’s security breach of Microsoft’s systems, which was attributed to Russian state-backed hackers and resulted in the disclosure of email correspondence between U.S. government agencies. However, she reiterated Eby's comments that there is no evidence suggesting sensitive personal information was compromised.

British Columbia Cyberattacks' Timeline

The B.C. government first detected a potential cyberattack on April 10. Government security experts initiated an investigation and confirmed the cyberattack on April 11. The incident was then reported to the Canadian Centre for Cyber Security, a federal agency, which engaged Microsoft’s Diagnostics and Recovery Toolset (DaRT) due to the sophistication of the attack, according to Salter. Premier David Eby was briefed about the cyberattack on April 17. On April 29, government cybersecurity experts discovered evidence of another hacking attempt by the same “threat actor,” Salter said. The same day, provincial employees were instructed to immediately change their passwords to new ones at least 14 characters long. B.C.’s Office of the Chief Information Officer (OCIO) described the reset as part of the government's routine security updates. Citing the ongoing nature of the investigation, the OCIO did not confirm whether the password reset was actually linked to the British Columbia government cyberattack but said, "Our office has been in contact with government about these incidents, and that they have committed to keeping us informed as more information and analysis becomes available."

Another cyberattack was identified on May 6, with Salter saying the same threat actor was responsible for all three incidents.

The cyberattacks were not disclosed to the public until late Wednesday evening, when people were busy watching an ice hockey game, prompting accusations from B.C. United MLAs that the government was attempting to conceal the attack.

“How much sensitive personal information was compromised, and why did the premier wait eight days to issue a discreet statement during a Canucks game to disclose this very serious breach to British Columbians?” Opposition MLA Todd Stone asked. Salter clarified that the cybersecurity centre advised against public disclosure to prevent other hackers from exploiting vulnerabilities in government networks. She said there had been three separate cybersecurity incidents, all involving efforts by the hackers to conceal their activities. Following a briefing of the B.C. NDP cabinet on May 8, the cyber centre concurred that the public could be notified. Salter said that over 40 terabytes of data were being analyzed, but she did not specify whether the hackers targeted specific areas of government records such as health data, auto insurance, or social services. The province stores the personal data of millions of British Columbians, including social insurance numbers, addresses, and phone numbers. Public Safety Minister and Solicitor General Mike Farnworth told reporters Friday that no ransom demands were received, making the motivation behind the multiple cyberattacks unclear.

Farnworth said that the CCCS believes a state-sponsored actor is behind the attack based on the sophistication of the attempted breaches.

"Being able to do what we are seeing, and covering up their tracks, is the hallmarks of a state actor or a state-sponsored actor." - Farnworth
Government sources told CTV News that various government ministries and agencies, and their respective websites, networks, and servers, face approximately 1.5 billion “unauthorized access” or hacking attempts daily. The number has increased over the last few years, which is why the province budgets millions of dollars per year for cybersecurity. Salter confirmed the government spends more than $25 million a year to fortify its defenses and added that previous investments in B.C.'s cybersecurity infrastructure helped detect the multiple attacks last month. Microsoft last month alerted several U.S. federal agencies that Russia-backed hackers might have pilfered emails sent by the company to those agencies, including sensitive information like usernames and passwords. However, Salter did not confirm whether Russian-backed hackers are associated with the B.C. security breach.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

The top 3 ways to use generative AI to empower knowledge workers 

8 May 2024 at 09:35

Though generative AI is still a nascent technology, it is already being adopted by teams across companies to unleash new levels of productivity and creativity. Marketers are deploying generative AI to create personalized customer journeys. Designers are using the technology to boost brainstorming and iterate between different content layouts more quickly. The future of technology is exciting, but there can be implications if these innovations are not built responsibly.

As Adobe’s CIO, I get questions from both our internal teams and other technology leaders: how can generative AI add real value for knowledge workers—at an enterprise level? Adobe is a producer and consumer of generative AI technologies, and this question is urgent for us in both capacities. It’s also a question that CIOs of large companies are uniquely positioned to answer. We have a distinct view into different teams across our organizations, and working with customers gives us more opportunities to enhance business functions.

Our approach

When it comes to AI at Adobe, my team has taken a comprehensive approach that includes investment in foundational AI, strategic adoption, an AI ethics framework, legal considerations, security, and content authentication. The rollout follows a phased approach, starting with pilot groups and building communities around AI.

This approach includes experimenting with and documenting use cases like writing and editing, data analysis, presentations and employee onboarding, corporate training, employee portals, and improved personalization across HR channels. The rollouts are accompanied by training podcasts and other resources to educate and empower employees to use AI in ways that improve their work and keep them more engaged.

Unlocking productivity with documents

While there are innumerable ways that CIOs can leverage generative AI to help surface value at scale for knowledge workers, I’d like to focus on digital documents—a space in which Adobe has been a leader for over 30 years. Whether they are sales associates who spend hours responding to requests for proposals (RFPs) or customizing presentations, marketers who need competitive intel for their next campaign, or legal and finance teams who need to consume, analyze, and summarize massive amounts of complex information—documents are a core part of knowledge workers’ daily work life. Given their ubiquity and the fact that critical information lives inside companies’ documents (from research reports to contracts to white papers to confidential strategies and even intellectual property), most knowledge workers are experiencing information overload. The impact on both employee productivity and engagement is real.

Lessons from customer zero

Adobe invented the PDF, and for decades we’ve been innovating new ways for knowledge workers to get more productive with their digital documents. Earlier this year, the Acrobat team approached my team about launching an all-employee beta for the new generative AI-powered AI Assistant. The tool is designed to help people consume the information in documents faster and enable them to consolidate and format information into business content.

I faced all the same questions every CIO is asking about deploying generative AI across the business—from security and governance to use cases and value. We discovered the following three specific ways that generative AI helped (and is still helping) our employees work smarter and improve productivity.

  1. Faster time to knowledge
    Our employees used AI Assistant to close the gap between understanding and action for large, complicated documents. The generative AI-powered tool’s summary feature automatically generates an overview to give readers a quick understanding of the content. A conversational interface allows employees to “chat” with their documents and provides a list of suggested questions to help them get started. To get more details, employees can ask the assistant to generate top takeaways or surface only the information on a specific topic. At Adobe, our R&D teams used to spend more than 10 hours a week reading and analyzing technical white papers and industry reports. With generative AI, they’ve been able to nearly halve that time by asking questions and getting answers about exactly what they need to know and instantly identifying trends or surfacing inconsistencies across multiple documents.

  2. Easy navigation and verification
    AI-powered chat is gaining ground on traditional search when it comes to navigating the internet. However, there are still challenges when it comes to accuracy and connecting responses to the source. Acrobat AI Assistant takes a more focused approach, applying generative AI to the set of documents employees select and providing hot links and clickable citations along with responses. So instead of using the search function to locate random words or trying to scan through dozens of pages for the information they need, employees can navigate quickly to the cited source, where they can verify the information and move on, or spend time diving deeper to learn more. One example of where generative AI is having a huge productivity impact is with our sales teams, who spend hours researching prospects by reading materials like annual reports as well as responding to RFPs. Consuming that information and finding just the right details for RFPs can cost each salesperson more than eight hours a week. Armed with AI Assistant, sales associates quickly navigate pages of documents and identify critical intelligence to personalize pitch decks and instantly find and verify technical details for RFPs, cutting the time they spend down to about four hours.

  3. Creating business content
    One of the most interesting use cases we helped validate is taking information in documents and formatting and repurposing that information into business content. With nearly 30,000 employees dispersed across regions, we have a lot of employees who work asynchronously and depend on technology and colleagues to keep them up to date. Using generative AI, employees can now summarize meeting transcripts, surface action items, and instantly format the information into an email for sharing with their teams or a report for their manager. Before starting the beta, our communications teams reported spending a full workday (seven to 10 hours) per week transforming documents like white papers and research reports into derivative content like media briefing decks, social media posts, blogs, and other thought leadership content. Today they’re saving more than five hours a week by instantly generating first drafts with the help of generative AI.

Simple, safe, and responsible

CIOs love learning about and testing new technologies, but at times those technologies can require lengthy evaluations and implementation processes. Acrobat AI Assistant can be deployed in minutes on the desktop, web, or mobile apps employees already know and use every day. Acrobat AI Assistant leverages a variety of processes, protocols, and technologies so our customers’ data remains their data and they can deploy the features with confidence. No document content is stored or used to train AI Assistant without customers’ consent, and the features only deliver insights from documents users provide. For more information about how Adobe is deploying generative AI safely, visit here.

Generative AI is an exciting technology with incredible potential to help every knowledge worker work smarter and more productively. By having the right guardrails in place, identifying high-value use cases, and providing ongoing training and education to encourage successful adoption, technology leaders can set their workforces and companies up to be wildly successful in our AI-accelerated world.

This content was produced by Adobe. It was not written by MIT Technology Review’s editorial staff.

Multimodal: AI’s new frontier

Multimodality is a relatively new term for something extremely old: how people have learned about the world since humanity appeared. Individuals receive information from myriad sources via their senses, including sight, sound, and touch. Human brains combine these different modes of data into a highly nuanced, holistic picture of reality.

“Communication between humans is multimodal,” says Jina AI CEO Han Xiao. “They use text, voice, emotions, expressions, and sometimes photos.” That’s just a few obvious means of sharing information. Given this, he adds, “it is very safe to assume that future communication between human and machine will also be multimodal.”

A technology that sees the world from different angles

We are not there yet. The furthest advances in this direction have occurred in the fledgling field of multimodal AI. The problem is not a lack of vision. While a technology able to translate between modalities would clearly be valuable, Mirella Lapata, a professor at the University of Edinburgh and director of its Laboratory for Integrated Artificial Intelligence, says “it’s a lot more complicated” to execute than unimodal AI.

In practice, generative AI tools use different strategies for different types of data when building large data models—the complex neural networks that organize vast amounts of information. For example, those that draw on textual sources separate the text into individual tokens, usually words. Each token is assigned an “embedding” or “vector”: a numerical representation of how and where the token is used compared to others. Collectively, these vectors create a mathematical representation of the token’s meaning. An image model, on the other hand, might use pixels as its tokens for embedding, and an audio model might use sound frequencies.
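
As a rough illustration of the text case, the sketch below splits a sentence into word-level tokens and looks up a vector for each from a small embedding table. The vocabulary, dimensions, and values are toy assumptions; real models learn these embeddings from enormous corpora during training.

```python
import numpy as np

# Toy vocabulary and embedding table; real models learn these vectors from data.
vocab = {"the": 0, "oak": 1, "tree": 2, "rustles": 3}
embedding_dim = 4
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), embedding_dim))

def embed_text(sentence: str) -> np.ndarray:
    """Split text into word-level tokens and look up one vector per token."""
    token_ids = [vocab[w] for w in sentence.lower().split() if w in vocab]
    return embedding_table[token_ids]

print(embed_text("The oak tree rustles"))  # one 4-dimensional vector per token
```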

A multimodal AI model typically relies on several unimodal ones. As Henry Ajder, founder of AI consultancy Latent Space, puts it, this involves “almost stringing together” the various contributing models. Doing so involves various techniques to align the elements of each unimodal model, in a process called fusion. For example, the word “tree”, an image of an oak tree, and audio in the form of rustling leaves might be fused in this way. This allows the model to create a multifaceted description of reality.
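
A minimal sketch of that fusion step, under the assumption of three pre-trained unimodal encoders whose outputs are projected into one shared space and averaged; the dimensions and projection matrices here are random placeholders, whereas a trained multimodal model learns these alignments so that related concepts land close together.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical outputs of three separate unimodal encoders (dimensions are made up).
text_vec = rng.normal(size=128)    # embedding of the word "tree"
image_vec = rng.normal(size=512)   # embedding of a photo of an oak tree
audio_vec = rng.normal(size=256)   # embedding of rustling-leaves audio

# Projection matrices that map each modality into a 64-dimensional shared space.
proj_text = rng.normal(size=(128, 64))
proj_image = rng.normal(size=(512, 64))
proj_audio = rng.normal(size=(256, 64))

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

# Fusion: align the modalities in the shared space, then combine them into one
# joint representation of the concept "tree".
aligned = [
    normalize(text_vec @ proj_text),
    normalize(image_vec @ proj_image),
    normalize(audio_vec @ proj_audio),
]
fused = np.mean(aligned, axis=0)
print(fused.shape)  # (64,)
```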

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Desperate Taylor Swift fans defrauded by ticket scams

8 May 2024 at 09:54

Ticket scams are very common and apparently hard to stop. When there are nowhere near enough tickets for a concert to accommodate all the fans who desperately want to be there, it makes for an ideal hunting ground for scammers.

With a ticket scam, you pay for a ticket and you either don’t receive anything or what you get doesn’t get you into the venue.

As reported by the BBC, Lloyds Bank estimates that fans have lost £1m ($1.25m) in ticket scams ahead of the UK leg of Taylor Swift’s Eras tour. Roughly 90% of these scams were said to have started on Facebook.

Many of these operations work with compromised Facebook accounts and hurt both the buyer and the owner of the abused account. Those account owners are complaining about the response, or lack thereof, they are getting from Meta (Facebook’s parent company) to their attempts to report the account takeovers.

Victims feel powerless as they see some of their friends and family fall for the ticket scam.

“After I reported it, there were still scams going on for at least two or three weeks afterwards.”

We saw the same last year when “Swifties” from the US filed reports about scammers taking advantage of fans, some of whom lost as much as $2,500 after paying for tickets that didn’t exist or never arrived. The Better Business Bureau reportedly received almost 200 complaints nationally related to the Swift tour, with complaints ranging from refund struggles to outright scams.

Now that the tour has European cities on the schedule the same is happening all over again.

And mind you, it’s not just concerts. Any event that is sold out through the regular, legitimate channels and works with transferable tickets is an opportunity for scammers. Recently we saw a scam working from sponsored search results for the Van Gogh Museum in Amsterdam. People who clicked on the ad were redirected to a phishing site where they were asked to fill in their credit card details.

Consider that to be a reminder that it’s easy for scammers to set up a fake website that looks genuine. Some even use a name or URL similar to that of the legitimate website. If you’re unsure or it sounds too good to be true, leave the website immediately.

Equally important to keep in mind is the power of AI, which has made creating a convincing photograph of fake tickets child’s play.

How to avoid ticket scams

No matter how desperate you are to visit a particular event, please be careful. When it’s sold out and someone offers you tickets, there are a few precautions you should take.

  • Research the ticket seller. Anybody can set up a fake ticket website, and sponsored ads showing at the top of search engines can be rife with bogus sellers. You may also run into issues buying tickets from sites like eBay. Should you decide to use sites other than well-known entities like Ticketmaster, check for reviews of the seller.
  • Are the tickets transferable? For some events the tickets are non-transferable, which makes it unwise, at the very least, to try to buy tickets from someone who has decided they “don’t need or want them” after all. You may end up with tickets that you can’t use.
  • Use a credit card if possible. You’ll almost certainly have more protection than if you pay using your debit card, or cash. We definitely recommend that you avoid using cash. If someone decides to rip you off, that money is gone forever.
  • A “secure” website isn’t all it seems. While sites that use HTTPS (the padlock) ensure your communication with them is encrypted, this does not guarantee the site is legitimate. Anyone can set up an HTTPS website, including scammers.
  • It’s ticket inspector time. One of the best ways to know for sure that your ticket is genuine is to actually look at it. Are the date and time correct? The location? Are the seat numbers what you were expecting to see? It may well be worth calling the event organizers or the event location to confirm that all is as it should be. Some events will give examples of what a genuine ticket should look like on the official website.
  • Use a blocklist. Software like Malwarebytes Browser Guard will block known phishing and scam sites.

Scaling individual impact: Insights from an AI engineering leader

11 April 2024 at 10:00

Traditionally, moving up in an organization has meant leading increasingly large teams of people, with all the business and operational duties that entails. As a leader of large teams, your contributions can become less about your own work and more about your team’s output and impact. There’s another path, though. The rapidly evolving fields of artificial intelligence (AI) and machine learning (ML) have increased demand for engineering leaders who drive impact as individual contributors (ICs). An IC has more flexibility to move across different parts of the organization, solve problems that require expertise from different technical domains, and keep their skill set aligned with the latest developments (hopefully with the added benefit of fewer meetings).

In an executive IC role as a technical leader, I have a deep impact by looking at the intersections of systems across organizational boundaries, prioritizing the problems that really need solving, and then assembling stakeholders from across teams to create the best solutions.

Driving influence through expertise

People leaders typically have the benefit of an organization that scales with them. As an IC, you scale through the scope, complexity, and impact of the problems you help solve. The key to being effective is getting really good at identifying and structuring problems. You need to proactively identify the most impactful problems to solve—the ones that deliver the most value but that others aren’t focusing on—and structure them in a way that makes them easier to solve.

People skills are still important because building strong relationships with colleagues is fundamental. When consensus is clear, solving problems is straightforward, but when the solution challenges the status quo, it’s crucial to have established technical credibility and organizational influence.

And then there’s the fun part: getting your hands dirty. Choosing the IC path has allowed me to spend more time designing and building AI/ML systems than other management roles would—prototyping, experimenting with new tools and techniques, and thinking deeply about our most complex technical challenges.

A great example I’ve been fortunate to work on involved designing the structure of a new ML-driven platform. It required significant knowledge at the cutting edge and touched multiple other parts of the organization. The freedom to structure my time as an IC allowed me to dive deep in the domain, understand the technical needs of the problem space, and scope the approach. At the same time, I worked across multiple enterprise and line-of-business teams to align appropriate resources and define solutions that met the business needs of our partners. This allowed us to deliver a cutting-edge solution on a very short timescale to help the organization safely scale a new set of capabilities.

Being an IC lets you operate more like a surgeon than a general. You focus your efforts on precise, high-leverage interventions. Rapid, iterative problem-solving is what makes the role impactful and rewarding.

The keys to success as an IC executive

In an IC executive role, there are key skills that are essential. First is maintaining deep technical expertise. I usually have a couple of different lines of study going on at any given time, one that’s closely related to the problems I’m currently working on, and another that takes a long view on foundational knowledge that will help me in the future.

Second is the ability to proactively identify and structure high-impact problems. That means developing a strong intuition for where AI/ML can drive the most business value, and framing the problem in a way that achieves the highest business impact.

Determining how the problem will be formulated means considering what specific problem you are trying to solve and what you are leaving off the table. This intentional approach aligns the right complexity level to the problem to meet the organization’s needs with the minimum level of effort. The next step is breaking down the problem into chunks that can be solved by the people or teams aligned to the effort.

Doing this well requires building a diverse network across the organization. Building and nurturing relationships in different functional areas is crucial to IC success, giving you the context to spot impactful problems and the influence to mobilize resources to address them.

Finally, you have to be an effective communicator who can translate between technical and business audiences. Executives need you to contextualize system design choices in terms of business outcomes and trade-offs. And engineers need you to provide crisp problem statements and solution sketches.

It’s a unique mix of skills, but ICs who cultivate that combination of technical depth, organizational savvy, and business-conscious communication can drive powerful innovations. And you can do it while preserving the hands-on problem-solving abilities that likely drew you to engineering in the first place.

Empowering IC career paths

As the fields of AI/ML evolve, there’s a growing need for senior ICs who can provide technical leadership. Many organizations are realizing that they need people who can combine deep expertise with strategic thinking to ensure these technologies are being applied effectively.

However, many companies are still figuring out how to empower and support IC career paths. I’m fortunate that Capital One has invested heavily in creating a strong Distinguished Engineer community. We have mentorship, training, and knowledge-sharing structures in place to help senior ICs grow and drive innovation.

ICs have more freedom than most to craft their own job description around their own preferences and skill sets. Some ICs may choose to focus on hands-on coding, tackling deeply complex problems within an organization. Others may take a more holistic approach, examining how teams intersect and continually collaborating in different areas to advance projects. Either way, an IC needs to be able to see the organization from a broad perspective, and know how to spot the right places to focus their attention.

Effective ICs also need the space and resources to stay on the bleeding edge of their fields. In a domain like AI/ML that’s evolving so rapidly, continuous learning and exploration are essential. It’s not a nice-to-have feature, but a core part of the job, and since your time as an individual doesn’t scale, it requires dedication to time management.

Shaping the future

The role of an executive IC in engineering is all about combining deep technical expertise with a strategic mindset. That’s a key ingredient in the kind of transformational change that AI is driving, but realizing this potential will require a shift in the way many organizations think about leadership.

I’m excited to see more engineers pursue an IC path and bring their unique mix of skills to bear on the toughest challenges in AI/ML. With the right organizational support, I believe a new generation of IC leaders will emerge and help shape the future of the field. That’s the opportunity ahead of us, and I’m looking forward to leading by doing.

This content was produced by Capital One. It was not written by MIT Technology Review’s editorial staff.

Modernizing data with strategic purpose

Data modernization is squarely on the corporate agenda. In our survey of 350 senior data and technology executives, just over half say their organization has either undertaken a modernization project in the past two years or is implementing one today. An additional one-quarter plan to do so in the next two years. Other studies also consistently point to businesses’ increased investment in modernizing their data estates.

It is no coincidence that this heightened attention to improving data capabilities coincides with interest in AI, especially generative AI, reaching a fever pitch. Indeed, supporting the development of AI models is among the top reasons the organizations in our research seek to modernize their data capabilities. But AI is not the only reason, or even the main one.

This report seeks to understand organizations’ objectives for their data modernization projects and how they are implementing such initiatives. To do so, it surveyed senior data and technology executives across industries. The research finds that many have made substantial progress and investment in data modernization. In many organizations, however, alignment on data strategy and the goals of modernization appears far from complete, leaving a disconnect between data and technology teams and the rest of the business. Data and technology executives and their teams can still do more to understand their colleagues’ data needs and actively seek their input on how to meet them.

Following are the study’s key findings:

AI isn’t the only reason companies are modernizing the data estate. Better decision-making is the primary aim of data modernization, with nearly half (46%) of executives citing this among their three top drivers. Support for AI models (40%) and for decarbonization (38%) are also major drivers of modernization, as are improving regulatory compliance (33%) and boosting operational efficiency (32%).

Data strategy is too often siloed from business strategy. Nearly all surveyed organizations recognize the importance of taking a strategic approach to data. Only 22% say they lack a fully developed data strategy. When asked if their data strategy is completely aligned with key business objectives, however, only 39% agree. Data teams can also do more to bring other business units and functions into strategy discussions: 42% of respondents say their data strategy was developed exclusively by the data or technology team.

Data strategy paves the road to modernization. It is probably no coincidence that most organizations (71%) that have embarked on data modernization in the past two years have had a data strategy in place for longer than that. Modernization goals require buy-in from the business, and implementation decisions need strategic guidance, lest they lead to added complexity or duplication.

Top data pain points are data quality and timeliness. Executives point to substandard data (cited by 41%) and untimely delivery (33%) as the facets of their data operations most in need of improvement. Incomplete or inaccurate data leads enterprise users to question data trustworthiness. This helps explain why the most common modernization measure taken by our respondents’ organizations in the past two years has been to review and upgrade data governance (cited by 45%).

Cross-functional teams and DataOps are key levers to improve data quality. Modern data engineering practices are taking root in many businesses. Nearly half of organizations (48%) are empowering cross-functional data teams to enforce data quality standards, and 47% are prioritizing the implementation of DataOps. These sorts of practices, which echo the agile methodologies and product thinking that have become standard in software engineering, are only starting to make their way into the data realm.

Compliance and security considerations often hinder modernization. Compliance and security concerns are major impediments to modernization, each cited by 44% of the respondents. Regulatory compliance is mentioned particularly frequently by those working in energy, public sector, transport, and financial services organizations. High costs are another oft-cited hurdle (40%), especially among the survey’s smaller organizations.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Open-sourcing generative AI

9 April 2024 at 14:35

The views expressed in this video are those of the speakers, and do not represent any endorsement or sponsorship.

Is the open-source approach, which has democratized access to software, ensured transparency, and improved security for decades, now poised to have a similar impact on AI? We dissect the balance between collaboration and control, legal ramifications, ethical considerations, and innovation barriers as the AI industry seeks to democratize the development of large language models.

Explore more from Booz Allen Hamilton on the future of AI


About the speakers

Alison Smith, Director of Generative AI, Booz Allen Hamilton

Alison Smith is a Director of Generative AI at Booz Allen Hamilton where she helps clients address their missions with innovative solutions. Leading Booz Allen’s investments in Generative AI and grounding them in real business needs, Alison employs a pragmatic approach to designing, implementing, and deploying Generative AI that blends existing tools with additional customization. She is also responsible for disseminating best practices and key solutions throughout the firm to ensure that all teams are up-to-date on the latest available tools, solutions, and approaches to common client problems.

In addition to her role at Booz Allen, which balances technical solutions and business growth, Alison also enjoys staying connected to and serving her local community. From 2017 to 2021, Alison served on the board of a non-profit, the DC Open Government Coalition (DCOGC), a group that seeks to enhance public access to government information and ensure transparent government operations; in November 2021, Alison was recognized as a Power Woman in Code by DCFemTech.

Alison has an MBA from The University of Chicago Booth School of Business and a BA from Middlebury College.

Taking AI to the next level in manufacturing

Few technological advances have generated as much excitement as AI. In particular, generative AI seems to have taken business discourse to a fever pitch. Many manufacturing leaders express optimism: Research conducted by MIT Technology Review Insights found ambitions for AI development to be stronger in manufacturing than in most other sectors.

Manufacturers rightly view AI as integral to the creation of the hyper-automated intelligent factory. They see AI’s utility in enhancing product and process innovation, reducing cycle time, wringing ever more efficiency from operations and assets, improving maintenance, and strengthening security, while reducing carbon emissions. Some manufacturers that have invested to develop AI capabilities are still striving to achieve their objectives.

This study from MIT Technology Review Insights seeks to understand how manufacturers are generating benefits from AI use cases—particularly in engineering and design and in factory operations. The survey included 300 manufacturers that have begun working with AI. Most of these (64%) are currently researching or experimenting with AI. Some 35% have begun to put AI use cases into production. Many executives who responded to the survey indicate they intend to boost AI spending significantly during the next two years. Those who haven’t yet put AI into production are moving more gradually. To facilitate use-case development and scaling, these manufacturers must address challenges with talent, skills, and data.

Following are the study’s key findings:

  • Talent, skills, and data are the main constraints on AI scaling. In both engineering and design and factory operations, manufacturers cite a deficit of talent and skills as their toughest challenge in scaling AI use cases. The closer use cases get to production, the harder this deficit bites. Many respondents say inadequate data quality and governance also hamper use-case development. Insufficient access to cloud-based compute power is another oft-cited constraint in engineering and design.
  • The biggest players do the most spending, and have the highest expectations. In engineering and design, 58% of executives expect their organizations to increase AI spending by more than 10% during the next two years. And 43% say the same when it comes to factory operations. The largest manufacturers are far more likely to make big increases in investment than those in smaller—but still large—size categories.
  • Desired AI gains are specific to manufacturing functions. The most common use cases deployed by manufacturers involve product design, conversational AI, and content creation. Knowledge management and quality control are those most frequently cited at pilot stage. In engineering and design, manufacturers chiefly seek AI gains in speed, efficiency, reduced failures, and security. In the factory, they want better innovation above all, along with improved safety and a reduced carbon footprint.
  • Scaling can stall without the right data foundations. Respondents are clear that AI use-case development is hampered by inadequate data quality (57%), weak data integration (54%), and weak governance (47%). Only about one in five manufacturers surveyed have production assets with data ready for use in existing AI models. That figure dwindles as manufacturers put use cases into production. The bigger the manufacturer, the greater the problem of unsuitable data is.
  • Fragmentation must be addressed for AI to scale. Most manufacturers find some modernization of data architecture, infrastructure, and processes is needed to support AI, along with other technology and business priorities. A modernization strategy that improves interoperability of data systems between engineering and design and the factory, and between operational technology (OT) and information technology (IT), is a sound priority.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Tackling AI risks: Your reputation is at stake

8 April 2024 at 12:00

Forget Skynet: One of the biggest risks of AI is your organization’s reputation. That means it’s time to put science-fiction catastrophizing to one side and begin thinking seriously about what AI actually means for us in our day-to-day work.

This isn’t to advocate for navel-gazing at the expense of the bigger picture: It’s to urge technologists and business leaders to recognize that if we’re to address the risks of AI as an industry—maybe even as a society—we need to closely consider its immediate implications and outcomes. If we fail to do that, taking action will be practically impossible.

Risk is all about context

Risk is all about context. In fact, one of the biggest risks is failing to acknowledge or understand your context: That’s why you need to begin there when evaluating risk.

This is particularly important in terms of reputation. Think, for instance, about your customers and their expectations. How might they feel about interacting with an AI chatbot? How damaging might it be to provide them with false or misleading information? Maybe minor customer inconvenience is something you can handle, but what if it has a significant health or financial impact?

Even if implementing AI seems to make sense, there are clearly some downstream reputation risks that need to be considered. We’ve spent years talking about the importance of user experience and being customer-focused: While AI might help us here, it could also undermine those very things.

There’s a similar question to be asked about your teams. AI may have the capacity to drive efficiency and make people’s work easier, but used in the wrong way it could seriously disrupt existing ways of working. The industry has been talking a lot about developer experience recently—it’s something I wrote about for this publication—and the decisions organizations make about AI need to improve the experiences of teams, not undermine them.

In the latest edition of the Thoughtworks Technology Radar—a biannual snapshot of the software industry based on our experiences working with clients around the world—we talk about precisely this point. We call out AI team assistants as one of the most exciting emerging areas in software engineering, but we also note that the focus has to be on enabling teams, not individuals. “You should be looking for ways to create AI team assistants to help create the ‘10x team,’ as opposed to a bunch of siloed AI-assisted 10x engineers,” we say in the latest report.

Failing to heed the working context of your teams could cause significant reputational damage. Some bullish organizations might see this as part and parcel of innovation—it’s not. It’s showing potential employees—particularly highly technical ones—that you don’t really understand or care about the work they do.

Tackling risk through smarter technology implementation

There are lots of tools that can be used to help manage risk. Thoughtworks helped put together the Responsible Technology Playbook, a collection of tools and techniques that organizations can use to make more responsible decisions about technology (not just AI).

However, it’s important to note that managing risks—particularly those around reputation—requires real attention to the specifics of technology implementation. This was particularly clear in work we did with an assortment of Indian civil society organizations, developing a social welfare chatbot that citizens can interact with in their native languages. The risks here were not unlike those discussed earlier: The context in which the chatbot was being used (as support for accessing vital services) meant that inaccurate or “hallucinated” information could stop people from getting the resources they depend on.

This contextual awareness informed technology decisions. We implemented a version of something called retrieval-augmented generation to reduce the risk of hallucinations and improve the accuracy of the model the chatbot was running on.

Retrieval-augmented generation features in the latest edition of the Technology Radar. It might be viewed as part of a wave of emerging techniques and tools in this space that are helping developers tackle some of the risks of AI. These range from NeMo Guardrails—an open-source tool that puts limits on chatbots to increase accuracy—to the technique of running large language models (LLMs) locally with tools like Ollama, to ensure privacy and avoid sharing data with third parties. This wave also includes tools that aim to improve transparency in LLMs (which are notoriously opaque), such as Langfuse.
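
As a rough sketch of the retrieval-augmented generation pattern described above: relevant passages are retrieved from a trusted corpus and prepended to the prompt, so the model answers from supplied context rather than from memory alone. The corpus, embedding function, and call_llm stub below are illustrative stand-ins, not the system built for the chatbot project.

```python
import numpy as np

# Trusted passages the assistant is allowed to answer from (illustrative content).
documents = [
    "Applications for the welfare scheme close on 31 March.",
    "Eligible citizens can apply online or at a district office.",
]

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k passages most similar to the query by cosine similarity."""
    q = embed(query)
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (for example, a locally hosted LLM)."""
    return prompt  # echo the prompt so the sketch stays self-contained

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

print(answer("When do applications close?"))
```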

It’s worth pointing out, however, that it’s not just a question of what you implement, but also what you avoid doing. That’s why, in this Radar, we caution readers about the dangers of overenthusiastic LLM use and rushing to fine-tune LLMs.

Rethinking risk

A new wave of AI risk assessment frameworks aim to help organizations consider risk. There is also legislation (including the AI Act in Europe) that organizations must pay attention to. But addressing AI risk isn’t just a question of applying a framework or even following a static set of good practices. In a dynamic and changing environment, it’s about being open-minded and adaptive, paying close attention to the ways that technology choices shape human actions and social outcomes on both a micro and macro scale.

One useful framework is Dominique Shelton Leipzig’s traffic light framework. A red light signals something prohibited—such as discriminatory surveillance—while a green light signals low risk and a yellow light signals caution. I like the fact it’s so lightweight: For practitioners, too much legalese or documentation can make it hard to translate risk to action.

However, I also think it’s worth flipping the framework, to see risks as embedded in contexts, not in the technologies themselves. That way, you’re not trying to make a solution adapt to a given situation; you’re responding to a situation and addressing it as it actually exists. If organizations take that approach to AI—and, indeed, to technology in general—they will meet the needs of stakeholders and keep their reputations safe.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.

Purpose-built AI builds better customer experiences

In the bygone era of contact centers, the customer experience was tethered to a singular channel—the phone call. The journey began with a pre-recorded message prompting the customer to press a number corresponding to their query. Today’s contact centers have evolved from the confines of just traditional phone calls to multiple channels from emails to social media to chatbots.

Customers have access to more business information than ever. But improving the quality of customer experiences means becoming more customer-centric and data-driven and scaling available human representatives for round-the-clock assistance.

Enabling these improvements is no small feat for enterprises, though, says Michele Carlson, senior product marketing manager at NICE. With large data streams and the demand for personalized experiences, artificial intelligence has become the key enabler in fostering these better customer experiences.

“There’s such an enormous amount of data available that without artificial intelligence as this driving force for better customer experiences, it would be impossible to meet customers’ expectations today.”

Amid the many moving parts in a contact center, from managing multiple incoming calls to taking accurate notes on each interaction to measuring success metrics, AI can help smooth friction. Sentiment analysis can help supervisors identify in real time which calls require escalation or further support, and AI tools can summarize calls and automate note-taking to free up agents to focus more closely on customer needs. These use cases not only improve customer and employee experiences but also save time and money.

While the promises of AI have many enterprises making swift investments, Carlson cautions leaders to be goal-oriented first. Rather than deploy AI because it’s popular, AI-driven solutions need to be purpose-built to support and align with goals. 

“There are so many available artificial intelligence solutions right now, but it’s really critical to choose AI that is designed and built on data that is specific to your organization,” says Carlson.

Looking ahead, Carlson sees the evolution toward AI-enabled customer centricity as a signal of a customer experience paradigm shift where AI will augment not just operational details but offer insights into high-level business strategy.

“As everyone gets introduced to this technology,” says Carlson, “it’s going to be those that are open to using new things and open to using AI, but also the ones that are selecting the right types of artificial intelligence to complement their business that are going to be the most successful in using it, and gaining the efficiency and optimizing the customer experiences.”


This episode of Business Lab is produced in partnership with NICE.

Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic is deploying customer service with AI to maximize results. As artificial intelligence evolves in the call center, it can provide real-time guidance. But measuring success remains key to operational efficiency and customer satisfaction.

Two words for you: better service.

My guest is Michele Carlson, senior product marketing manager at NICE.

This podcast is produced in partnership with NICE.

Welcome, Michele.

Michele: Thank you so much, Laurel. I’m so excited to be here.

Laurel: Well, welcome. And let’s begin by setting some context for our conversation. Long ago, in technology years, one would talk to a live person when calling a customer service number, and then we moved on to automated menu choices and beyond. So how have call centers evolved to better serve customers? And to bring us up to the present day, how is AI an enabler of that evolution?

Michele: The really good place to get started is, how did this all begin? So right now, contact centers are more customer-focused than they’ve ever been. Like you mentioned, they first started with a call, or maybe what we call an IVR [interactive voice response], where you would put in a phone number, or put in a number if you wanted to go to a certain queue to answer a certain type of question. And now we’ve advanced far beyond that.

So there still are things like IVRs in the market, but there are more channels than ever now that customers are interacting with. So it’s not just the phone calls, it’s email, it’s social, it’s the chatbots on their website. It’s the more sophisticated website. So there’s more places that customers can get information about a business than ever before. So that’s something that’s really changed for contact centers.

So the way that they’re really handling that to give better customer experience, and to engage more with their customers, is focusing more on becoming customer-centric. Which are things like more personalization, being more data-driven, having greater availability for their agents. And all of these options that, for us as consumers, are really exciting because we can reach out to a business in many different ways at many different hours of the day, 24/7 access to get our questions answered.

While this is exciting for customers, it also creates a challenge for contact centers. Because, yes, it’s a way that they can evolve to serve their customers in all these places, but it’s a challenge for them. And you asked about artificial intelligence or AI, how is AI supporting that?

And that’s a big enabler for contact centers to be able to deliver these better experiences to customers, because there are so many channels, there’s so much need and expectation for personalization. There is a need to be more data-driven. And artificial intelligence allows businesses, allows contact centers, to evaluate and see what their customers are calling about, when their customers are going to call, what channels their customers are interacting with, and even the questions that customers are asking on different channels.

Using all of that data is a way that they can personalize and deliver better experiences. And artificial intelligence allows them to look at all that data. There’s such an enormous amount of data available that without artificial intelligence as this driving force for better customer experiences, it would be impossible to meet customers’ expectations today.

And so it’s really exciting to think … As you mentioned, it’s a long time ago in technology years, which is really a very short time. We’ve seen this evolution really pick up pace in the last few years with the integration of things like conversational AI and generative AI into that contact center space. And we’ll talk more about those in the course of our conversation, too.

Laurel: So yeah, speaking of data, it’s such a central role to most technology deployments and digital transformations. So then, what is the role of data in this context? And how can organizations best manage and use the data, since it is coming from so many different places as well as where it needs to be saved, to ensure a more efficient experience with contact centers?

Michele: Yeah, so the role data plays in our world today is a substantial one. “Data is the new oil.” It’s not my quote, but I’ll borrow it.

And data, there is so much of it. And the idea is it’s so very valuable, and it’s really critical to have all this data gathered together to be able to use it and be able to understand it.

So what contact centers are doing, the ones that are really successful in this, is they’re benefiting by aligning their data and building what we’re calling an interaction-centric approach.

Rather than saying I’m just going to look at my customers in a web version, or I’m just going to look at my customers through voice, being able to look at data from all over and all these different places makes this interaction-centric approach really crucial to getting started and using the data in a way that makes sense for the business.

So this is allowing them to move from things like voice and digital messaging to chatbots and social media, just on one platform. So if you or me, if we were to call into a contact center, they would know where our journey has gone. If we went to the website, if we went to the chatbot, if we called, how our call went, who we spoke with, what the outcome of that interaction was.

And that lens, in having the data, is more powerful in keeping this customer-centric approach, or this customer-centric mindset. Because it brings together all of these touch points on one channel, so that you can move interactions into one platform, which allows all these organizations to then look at different types of applications and solutions to solve different problems within their contact centers and their customer experience groups.

Laurel: So could you share some of those specific examples of how AI-driven solutions can address these unique challenges in contact centers, and also provide improvements in both customer and employee experiences?

Michele: Yes, of course. And I really like how you frame that question. Because it’s about both the customer experience and the employee experience. Without helping your employees and supporting your employees, it would be very difficult to provide, in turn, that great customer experience. And artificial intelligence-driven, AI-driven types of solutions, just to go back to that previous question around data, the AI solutions are only as good as the data that’s available to them.

So in a contact center where customer experience is the goal, you want your artificial intelligence and the data to be driven off of interactions with your customers, and that’s a very crucial foundational element across the board in choosing and using an artificially intelligent solution. One of the ways that organizations are doing this, they’re thinking about, we started with that IVR [interactive voice response]. By the time I get to item nine in the menu, I’ve usually forgotten what the previous items are.

But rather than using an IVR, you can use artificially intelligent routing. So you can predict why a customer is calling, who and which agent they might best interact with. And you can use data kind of on both sides to understand the customer’s needs, and the agents, to direct the call so it has the best outcome.
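
As a rough illustration of the routing Carlson describes, the sketch below guesses a likely contact reason from a customer’s recent journey and matches the call to the agent with the strongest score for that intent. The journey fields, weights, and agent skill scores are invented for the example; a production platform would learn these from interaction data rather than hand-written rules.

```python
# Hypothetical sketch of AI-style routing: predict the likely reason for a call
# from recent journey data, then pick the agent best suited to that intent.
# Field names, weights, and data are illustrative, not a real platform's API.

from collections import Counter

def predict_intent(recent_touchpoints):
    """Guess why the customer is calling from pages visited and chatbot topics."""
    votes = Counter()
    for touch in recent_touchpoints:
        if touch["channel"] == "web" and "billing" in touch["page"]:
            votes["billing_question"] += 2
        if touch["channel"] == "chatbot":
            votes[touch["topic"]] += 3
        if touch["channel"] == "voice" and not touch.get("resolved", True):
            votes[touch["topic"]] += 4  # unresolved calls weigh heavily
    return votes.most_common(1)[0][0] if votes else "general_inquiry"

def route_call(intent, agents):
    """Send the call to the available agent with the best score for this intent."""
    available = [a for a in agents if a["available"]]
    return max(available, key=lambda a: a["skills"].get(intent, 0.0))

journey = [
    {"channel": "web", "page": "/billing/invoice"},
    {"channel": "chatbot", "topic": "billing_question", "resolved": False},
]
agents = [
    {"name": "Sam", "available": True, "skills": {"billing_question": 0.9}},
    {"name": "Ana", "available": True, "skills": {"billing_question": 0.4, "tech_support": 0.95}},
]

intent = predict_intent(journey)
print(intent, "->", route_call(intent, agents)["name"])  # billing_question -> Sam
```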

Once the interaction begins, we can use data, artificial intelligence, to measure sentiment, customer sentiment. And in the course of the interaction, an agent can get a notification from their supervisor that says, “Here’s a couple different things that you can do to help improve this call.” Or, “Hey, in our coaching session, we talked about being more empathetic, and that’s what this means for this customer.” So, giving specific prompts to make the interaction move better in real-time.

Another example: supervisors are also burdened. They usually have a large team of up to 20, sometimes 25, agents who all have calls going at the same time.

And it’s difficult for supervisors to keep a pulse on, who is on which interaction with what customer? And is this escalation important, or which is the most important place? Because we can only be one place at one time. As much as we try with modern technology to do many things, we can only do one really well at once.

So for supervisors, they can get a notification about which calls are in need of escalation, and where they can best support their agent. And they can see how their teams are performing at one time as well.

Once the call is over, artificial intelligence can do things like summarize the interaction. During a contact center interaction, agents take in a lot of information. It is difficult to then decipher all of that, and their next call is going to be coming in very quickly. So artificial intelligence can generate a summary of that interaction, instead of the agent having to write notes.

And this is a huge improvement because it improves the experience for customers. The next time they call, they know those notes are going to go over to the agent, and the agent can use them. Agents also really appreciate this, because it’s difficult for them to recreate complicated details in shorthand (in healthcare, for example, all of the different coding numbers for different types of procedures, the provider or multiple providers, or explanations of benefits) and summarize all of that concisely before they take their next call.

So an auto-summarization tool does that automatically based off of the conversation, saving the agents up to a minute of post-call notes, but also saving businesses upwards of $14 million a year for 1,000 agents. Which is great, but agents appreciate it because 85% of them don’t really like all of their desktop applications. They have a lot of applications that they manage. So artificial intelligence is helping with these call summaries.
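
The auto-summarization Carlson describes is typically powered by generative models trained on customer experience data. The sketch below is only a keyword-based stand-in that shows the shape of the output, a structured post-call note built from transcript turns; the transcript format and keyword lists are assumptions made for the example.

```python
# Minimal, illustrative post-call summary: a heuristic stand-in for the kind of
# generative auto-summarization described above. Real products use LLMs trained
# on CX data; the keyword rules and transcript format here are assumptions.

KEYWORDS = {
    "issue": ["problem", "issue", "error", "charged", "denied"],
    "resolution": ["resolved", "refunded", "updated", "scheduled", "escalated"],
    "follow_up": ["call back", "follow up", "email you", "within 24 hours"],
}

def summarize(transcript):
    summary = {"issue": [], "resolution": [], "follow_up": []}
    for turn in transcript:                      # each turn: {"speaker", "text"}
        text = turn["text"].lower()
        for section, words in KEYWORDS.items():
            if any(w in text for w in words):
                summary[section].append(turn["text"])
    return "\n".join(f"{k.upper()}: {' | '.join(v) or 'n/a'}" for k, v in summary.items())

transcript = [
    {"speaker": "customer", "text": "I was charged twice for my procedure."},
    {"speaker": "agent", "text": "I refunded the duplicate claim."},
    {"speaker": "agent", "text": "We will email you confirmation within 24 hours."},
]
print(summarize(transcript))
```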

Artificial intelligence can also help with reporting after the fact, to see how all of the calls are trending: is there high sentiment or low sentiment? There’s also the quality management aspect of managing a contact center, where every single call is evaluated for compliance, for the greeting, and for how the agent resolved the call. One of the big challenges in quality management without artificial intelligence is that it’s very subjective.

So you and I could listen to the same call and have very different viewpoints of how it went. For agents, it’s difficult to get conflicting feedback on their performance. Artificial intelligence can listen to the call, extract baseline data points, and consistently evaluate every single interaction that’s coming into a contact center.

They get better feedback and then they grow, they learn, they have a better overall experience because of this consistency in the evaluations.
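
To make the consistency point concrete, here is a minimal sketch of rubric-style scoring applied identically to every call. The criteria and phrases are invented; real quality-management models are trained on labeled interactions rather than fixed phrase lists.

```python
# Sketch of rubric-based quality scoring applied identically to every call,
# illustrating the consistency point above. Criteria and phrases are invented
# for illustration; real QM models are trained on labeled interactions.

RUBRIC = {
    "greeting":   ["thank you for calling", "how can i help"],
    "compliance": ["this call may be recorded"],
    "resolution": ["is there anything else", "glad i could resolve"],
}

def evaluate(call_text, rubric=RUBRIC):
    text = call_text.lower()
    results = {name: any(p in text for p in phrases) for name, phrases in rubric.items()}
    results["score"] = round(100 * sum(results.values()) / len(rubric))
    return results

call = ("Thank you for calling, this call may be recorded. "
        "I'm glad I could resolve that for you today. Is there anything else?")
print(evaluate(call))  # every call gets the same criteria, so feedback is consistent
```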

So to answer your question, there are a lot of different ways artificial intelligence can support these contact center needs. And if you’re a business and customer satisfaction is your main goal, it’s really critical to think about not just one point of an interaction you have with a customer, but really before, during, and after every interaction, there’s all these opportunities to bring in data for greater consistency, and that’s something that is gained through using artificial intelligence.

Laurel: Yeah, that’s certainly quite a bit there. So when a company is thinking about integrating AI into their customer experiences, what are some common pitfalls they need to look out for, and how can those be mitigated or avoided?

Michele: Yeah, we’re all attracted to what’s new and exciting, and artificial intelligence is definitely on that list. One of the pitfalls I’ve seen as organizations are getting started is that they focus too much on using AI.

Somebody said they read a cool article, “We’ve got to use AI for that.” And yeah, you could use AI for that. But really you’re choosing a type of technology, or you’re choosing artificial intelligence, to solve a specific problem. So what I would encourage everyone to do is, think about what is your goal? And then choose AI-driven solutions to then support and align with your goals.

So for typical goals in the contact center, these might be around measuring customer experience like CSAT, sentiment, first call resolution, average handle time, a digital resolution rate, digital containment rate. These are all different types of metrics or goals an organization could have.

But among the chief dos and don’ts is, make sure you’re choosing AI that is specific to what your goals are. I would say very close second is making sure you’re choosing AI that is purpose-built for customer experience. Or purpose-built for, if you’re not in a contact center, whatever your specific type of organization does.

There are so many available artificial intelligence solutions right now, but it’s really critical to choose AI that is designed and built on data that is specific to your organization. So in this instance, customer experience.

And that allows you to benefit from how those models and that AI are built, so that you can use something out of the box. You don’t have to build everything on your own, because that could be very time-consuming. It also creates some ethical dilemmas if you don’t have a large enough data set, because your AI is only going to be as good as the data that it’s trained upon. So you want to make sure it has as much relevant data for your use case as possible.

Laurel: So you did touch on this a little bit. Which is, how can AI and automation enhance the day-to-day work of contact center agents without creating additional challenges? How can it actually continuously improve both the employee and customer experience?

Michele: Yeah, of course. So I’ll give a couple more examples beyond the few I gave earlier. The first, I think, is just being objective about how a call has been handled; that’s one of the most critical use cases.

And so at NICE, we have AI models that learn these different agent soft skills. So everything from how to ask good probing questions, to being empathetic, to taking ownership and resolving an issue efficiently. These models are looking at how to do that. And I think that’s one of the pieces that helps in the day-to-day work for contact center agents. Because they are getting consistent feedback on how they’re performing, but also the models continue to improve over time as well because you’re giving the models new data to work from, new calls, new interactions. And then that is improving both the evaluations for the agents, but it’s improving the customer experience as well.

Because if your baseline is that your sentiment level was at a five, and now you’ve raised that baseline to an eight, you’re consistently improving. You’ve now, one, measured what you want to do, which is improve customer experience. Two, you’ve given your agents a consistent measurement to deliver on your goal. And three, you’re continuing to measure over time as you have more and different interactions.

So not only are your agents getting better, but your models become more finely-tuned for your organization as well.

Laurel: So as we’re discussing this, in terms of coaching and training agents, how can AI-driven tools effectively provide that kind of real-time guidance without being intrusive? But then also, strike that balance between support and autonomy for the agents?

Michele: Yeah, and I think that’s a great place to be thinking about. If you are a contact center agent, you are on the phone, and then you’re also multitasking on your screen. You’re looking for data, you’re looking for information, you have the customer’s card and hopefully information up from their previous interaction. You have maybe an IM message with your supervisor, you have a lot going on at one time.

So I think when you’re thinking about things like real-time guidance, and coaching and training, this is where it becomes really crucial. I mentioned this being interaction-centric and having everything on one platform, but having the ability to use that sentiment data or customer satisfaction data in multiple places can be very powerful. Because then you’re not introducing new information in real time.

I think that’s the biggest piece to be aware of: real time should not be the first time agents are seeing this information about how they could become more empathetic, or how they can deliver on the coaching they had with their supervisor after a previous interaction.

So it comes back to anchoring on this interaction-centric piece and converging everything on one type of data platform. In the industry we call it CCaaS, contact center as a service. By delivering on one platform, you enable your organization to use the same data point in multiple places.

So the agent is using this data, they get a popup in real time. But they’ve also had conversations with their supervisor about these skill sets after their previous interactions. And it’s that cycle, and it’s that consistency, that makes agents better aware of and more adaptable for this environment. So that you’re not going to them and giving them yet another thing that they need to resolve, but you’re providing them with information that is relevant and real-time for that particular interaction that they’re on.

Laurel: So you’ve touched on this here and there, but a key element of deploying any kind of new tool or technology is measuring its success. What metrics should organizations then prioritize to measure both customer satisfaction and operational efficiency?

Michele: Some of the key metrics that organizations most focus on in the contact center are net promoter score, so NPS score, or a customer satisfaction score. Those are key measurements of how a customer perceives the interaction. You’ll also see things like customer sentiment, how a customer is feeling about the interaction, included in those measurements.

And then you get into some measurements that are more around the length of the call or efficiency-driven, like an average handle time or an average talk time. And I would say between the CSAT-type measurements and efficiency-type measurements, those make up the measurements for many of the voice types of interactions. So, how long a call is correlates directly to the cost of the call.

Then what’s kind of exciting in this new space is, there’s a lot more organizations that are moving into digital interactions as well. And organizations are looking at things like digital containment, or the number of digital resolutions. How many customer questions was my website or my chatbot able to resolve?

Those then translate into what could be cost savings compared with a voice interaction. Voice interactions are about a hundred times more expensive than a web or chatbot interaction. So by building effective chatbots and effective IVAs [intelligent virtual agents], organizations are also, in turn, improving their overall cost goals.
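
A back-of-the-envelope calculation shows how containment translates into savings under the rough 100-to-1 cost ratio mentioned above. The contact volume, containment rate, and per-call cost below are hypothetical inputs, not NICE figures.

```python
# Back-of-the-envelope containment savings, using the rough "voice costs ~100x
# a digital interaction" ratio mentioned above. All volumes and unit costs here
# are hypothetical inputs, not NICE figures.

monthly_contacts = 100_000
containment_rate = 0.30          # share of contacts resolved by chatbot/IVA
cost_per_voice_call = 5.00       # assumed fully loaded cost, in dollars
cost_per_digital = cost_per_voice_call / 100   # the ~100x ratio

contained = monthly_contacts * containment_rate
savings = contained * (cost_per_voice_call - cost_per_digital)
print(f"Estimated monthly savings: ${savings:,.0f}")   # ~$148,500 under these assumptions
```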

I’d say the other metric that everyone is focused on as well is agent retention. So are you giving your tools to your agents to support them in the coaching process and the quality process, in their interactions so that they have a better experience with your organization in answering questions, and that you’re giving them tools to grow as well?

Being a contact center agent is probably one of the hardest and most difficult jobs in that business space. And they are on the phone, they’re inundated with information. So any tools that you can provide them with to help them access information more quickly is hugely beneficial.

Laurel: So it’s clear that there’s lots of opportunities for greater efficiencies and optimizing customer experiences. But looking into the future, how do you see AI and customer experience evolving?

Michele: I think there’s definitely going to be more use cases where we see … And here at NICE, we’re already integrating generative AI and conversational AI into our solutions. And as you adapt these new technologies, it’s only going to build upon itself, where there’s going to be more evolutions in this space.

I think one of the most exciting things that we’ve introduced recently is this idea of using generative AI. So we’ve put guardrails around it, and the guardrails are really crucial when you’re working with artificial intelligence and the large language models, LLMs. We’ve all played with ChatGPT or Claude, and you can interact with those.

And what is really exciting that we’ve done is, we’ve used that type of technology to generate conversations and answers and information. But we’ve put guardrails on it so that organizations can better interact with just their customer experience specific data.

And what this means is, when you are in leadership in an organization, for example, if you were looking for a report, it may take you 12 emails back and forth saying what you’d like to see in that report. But if you have, again, all of these interactions on one platform, you’ve made it interaction-centric, and you’re using all these solutions that complement each other for every part of the interaction.

What you can do is, instead of emailing a data analyst back and forth for a report, you could interact with generative AI. You could type a question to say, “Hey, who are my top 10 performing agents by sentiment, and what are their key skills that they are using in those interactions?” Then you can generate a report based off of that.
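
A guardrailed reporting flow like the one Carlson describes might, in simplified form, map a leader’s question onto a whitelisted metric computed only from the platform’s own interaction data. The data, field names, and allowed-metric list below are invented for illustration; a real system would put a generative model in front of this kind of constrained query layer.

```python
# Illustrative "guardrailed" report query: the request is only ever answered from
# the platform's own interaction data, and only whitelisted metrics are allowed.
# A real implementation would put an LLM in front of this; the data and field
# names here are invented for the example.

from statistics import mean

interactions = [
    {"agent": "Sam", "sentiment": 0.8, "skills": ["empathy", "ownership"]},
    {"agent": "Sam", "sentiment": 0.9, "skills": ["probing questions"]},
    {"agent": "Ana", "sentiment": 0.6, "skills": ["ownership"]},
]

ALLOWED_METRICS = {"sentiment"}          # guardrail: nothing outside CX data

def top_agents(metric, n=10):
    if metric not in ALLOWED_METRICS:
        raise ValueError(f"'{metric}' is outside the permitted CX dataset")
    by_agent = {}
    for row in interactions:
        by_agent.setdefault(row["agent"], []).append(row[metric])
    ranked = sorted(by_agent.items(), key=lambda kv: mean(kv[1]), reverse=True)
    return [(agent, round(mean(vals), 2)) for agent, vals in ranked[:n]]

print(top_agents("sentiment"))   # [('Sam', 0.85), ('Ana', 0.6)]
```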

What we’re seeing is that all of these solutions are not necessarily replacing people, but we’re seeing a lot of AI-adjacent or AI-augmented interactions in this contact center space that are coming into play.

And what this is doing is allowing decision-makers to focus more on their overall strategy and the overall experience that they’re delivering to customers, rather than going back and forth over the specifics of a report. Agents, too, can type into a conversational AI interface to look for specific types of information, rather than searching everywhere for it.

So we are seeing a lot more AI-augmented users. And as everyone gets introduced to this technology, it’s going to be those that are open to using new things and open to using AI, but also the ones that are selecting the right types of artificial intelligence to complement their business, that are going to be the most successful in using it, gaining the efficiency, and optimizing the customer experience.

Laurel: That’s great insight, Michele. Thank you so much for being on the Business Lab today.

Michele: Thanks so much, Laurel. It was great to be here.

Laurel: That was Michele Carlson, senior product marketing manager at NICE, whom I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the global director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world.

For more information about us and the show, please check out our website at technologyreview.com. This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studio. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Scaling customer experiences with data and AI

Today, interactions matter more than ever. According to NICE research, once a consumer makes a buying decision for a product or service, 80% of their decision to keep doing business with that brand hinges on the quality of their customer service experience. Enter AI.

“I think AI is becoming a really integral part of every business today because it is finding that sweet spot in allowing businesses to grow while finding key efficiencies to manage that bottom line and really do that at scale,” says Andy Traba, vice president of product marketing at NICE.

When many think of AI and customer experiences, chatbots that give customers more headaches than help often come to mind. However, emerging AI use cases are enabling greater efficiencies than ever. From sentiment analysis to co-pilots to integration throughout the entire customer journey, the evolving era of AI is reducing friction and building better relationships between enterprises and both their employees and customers.

“When we think about bolstering AI capabilities, it’s really about getting the right data to train my models on so that they have those best outcomes.”

Deploying any technology requires a delicate balance between delivering quality solutions without compromising the bottom line. AI integration offers investment returns by scaling customer and employee capabilities, automating tedious and redundant tasks, and offering consistent experiences based on collected and specialized data.

“I think as you’re hopefully venturing into leveraging AI more to improve your business, the key recommendation I would provide is just to focus on those crystal clear high-probability use cases and get those early wins and then reinvest back into the business,” says Traba.

While artificial intelligence has increasingly grabbed headlines in recent years, augmented intelligence—where AI tools are used to enhance human capabilities rather than automate them—is worthy of similar buzz for its potential in the customer experience space, says Traba.

Currently, the customer experience landscape is highly reactive. Looking ahead, Traba foresees a shift to proactive and predictive customer experiences that blend both AI and augmented intelligence. Say a customer’s device is reaching its end-of-life state. Rather than the customer reaching out to a chatbot or contact center, AI tools would flag the device’s issue early and direct the customer to a live chat with a representative, offering both the efficiency of automation and personalized help from a human representative.

“Where I see the future evolving in terms of customer experiences, is being much more proactive with the convergence of data, these advancements of technology, and certainly generative AI,” says Traba.

This episode of Business Lab is produced in partnership with NICE.

Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic is building better customer and employee experiences with artificial intelligence. Integrating data and AI solutions into everyday business can help provide insights, create efficiencies, and free up time for employees to work on more complicated issues. And all of this builds a better experience for customers.

Two words for you: augmented intelligence.

My guest is Andy Traba, vice president of product marketing at NICE.

This podcast is produced in partnership with NICE.

Welcome Andy.

Andy Traba: Hi Laurel. Thanks for having me.

Laurel: Well, thanks for being here. So to set some context, could you describe the current state of AI within customer experience? Common use cases that come to mind are chatbots, but what are some other applications for AI in this space?

Andy: Thank you. I think it’s a great question to get started, and I think first and foremost, the use of AI is growing everywhere. Certainly, we had this big boom last year where everybody started talking about AI thanks to ChatGPT and a lot of the advancements with generative AI, and we’re certainly seeing a lot more doing now, moving beyond just talking. So just growing a use case of trying to apply AI everywhere to improve experiences. One of the more popular ones, and this technology has been around for some time, is sentiment analysis. So instead of just proactively surveying customers to ask how are they doing, what was their experience like, using AI models to analyze the conversations they’re having with brands and automatically determine that. And it’s also a good use case, I think, to emphasize the importance of data that goes into the training of AI models.

As you think about sentiment analysis, you want to train those models based on the actual customer experience conversations, maybe past records or even surveys. What you want to avoid is training a sentiment model maybe based on movie reviews or Amazon reviews, something that’s not really well connected. So certainly sentiment analysis is a very popular use case that goes beyond just chatbots.
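
To make the training-data point concrete, here is a toy sketch that fits a sentiment classifier on customer-service transcripts rather than movie or product reviews, so the model’s vocabulary matches how customers actually talk to a brand. The handful of labeled examples is invented; real models are trained on large labeled CX interaction sets.

```python
# Toy illustration of the training-data point: fit a sentiment model on
# customer-service transcripts (not movie reviews) so its vocabulary matches
# how customers actually talk to a brand. Data below is invented; real models
# train on large labeled CX interaction sets.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

cx_transcripts = [
    "thanks so much, you resolved my billing issue quickly",
    "I have been on hold for an hour and no one can help",
    "the agent explained my benefits clearly, great service",
    "this is the third time I am calling about the same outage",
]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(cx_transcripts, labels)

# Predicted label for a new customer message
print(model.predict(["I have been on hold again and no one can help with the outage"]))
```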

Two other ones I’ll bring up are co-pilots. We’ve seen, certainly, a lot of recent news with the launch of Microsoft Copilot and other forms of copilots within the contact center, certainly helping customer service agents. It’s a very popular use case that we see. The reason driving that demand is that the types of conversations that are getting to agents today are much more complex. AI has done a good job of taking away the easy stuff. We no longer have to call into a contact center to reset our passwords, so what’s left over for the agents are much more difficult types of interactions. So being able to assist them in real time with prompts and guidance, and recommending knowledge articles to make their job easier and more effective, is really popular.

And then the third and final one, just on this question, is really the rise of AI-driven journeys. Many, many years ago, you and I would call into a contact center, and the only channel we could use was voice. Today, those channels have exploded. There’s social media, there’s messaging, there’s voice, there are AI assistants that we can chat with. So being able to orchestrate or navigate a customer effectively through that journey, and recommend the next best action or the next best channel for them to reduce that complexity, is really in demand as well. How can I even get to a point where I can proactively engage with them on the channel of their choice, at the time of day that we’re likely to get a response? That’s certainly an area where we see AI playing an important role today, and even more so in the future. So those are the three, really: sentiment analysis, the rise of co-pilots, and then using AI across the entire customer journey.

Laurel: So as AI becomes more popular across enterprises and across industries, why is integrating AI and customer experience then so crucial for today’s business landscape?

Andy: I think it’s so crucial today because it’s finding this sweet spot in terms of business decision-making. When we think of business decision-making, we are often challenged with, am I going to focus on revenue or cost cutting? Am I going to focus on building new products or perfecting my existing products? And rarely has there been a technology that has allowed a business to achieve all of those at once. But we’re seeing that today with AI finding a sweet spot where I can improve revenue and keep customers happy and renewing or even gain new ones without having to spend additional money. I could even do that in a more efficient way with AI. Within AI, I can take a very innovative approach and produce new products that my customers demand and save time and money through efficiencies in making my current products better. I think AI is becoming a really integral part of every business today because it is finding that sweet spot in allowing businesses to grow while finding key efficiencies to manage that bottom line and really do that at scale.

Laurel: And speaking of those efficiencies, employee experience lays that foundation for the customer. But based on your time at NICE and within business operations, how does employee experience affect the overall experience then for customers?

Andy: I think what we’ve seen at NICE is really that customer experience and employee experience are hand in glove. They’re one and the same. They have tremendous correlation between each other. Some examples, just to give some anecdotes, and this is customer experience really happening everywhere. If you go into a car dealership for a Tesla or a BMW, a high-end product, but you are interacting with a salesperson who’s a little pushy or maybe just having a bad day, it’s going to deteriorate the overall customer experience, so that bad employee experience causes a negative effect. Same thing if you go to your favorite local restaurant, but you maybe have a new server who’s not really well-trained or is still figuring out the menu and the logistics that’s going to have a negative spillover effect. And then even on the flip side of that, you can see employee experience having a positive effect on their overall customer experience.

If employees are engaged and they have the right information and the right tools, they can turn a negative into a positive. Think of airlines, a very commoditized industry right now, but if you have a problem with your flight and it got canceled and you have a critical moment of need, that employee from that airline could really turn that experience around by finding a new flight, booking you, making sure that you are on your trip and meeting your destination on time or without very little delay. So I think when we think about experiences at large and the employee and the customer outcomes are very much tied together, we’ve done research here at NICE on this exact topic, and what we found was once a consumer makes a buying decision for a particular product or service, after that point, 80% of that consumer’s decision to continue doing business with that brand is based on the quality of their interactions.

So how those conversations play out, plays a very, very important part of whether or not they will continue doing business with that brand. Today, interactions matter more than ever. To conclude on this question, one of my favorite quotes, customer experience today isn’t just part of the business, it is the business. And I think employees play a really important front role in achieving that.

Laurel: That certainly makes sense. 80% is a huge number, and I think of that in my own experiences, but could you explain the difference between artificial intelligence and augmented intelligence and also how they overlap?

Andy: Yeah, it’s a great question. I think today artificial intelligence is certainly capturing all of the buzz, but what I think is just as buzzworthy is augmented intelligence. So let’s start by defining the two. So artificial intelligence refers to machines mimicking human cognition. And when we think about customer experience, there’s really no better example of that than chatbots or virtual assistants. Technology that allows you to interact with the brand 365 24/7 at any time that you need, and it’s mimicking the conversations that you would normally have with a live human customer service representative. Augmented intelligence on the other hand, is really about AI enhancing human capabilities, increasing the cognitive load of an individual, allowing them to do more with less, saving them time. I think in the domain of customer experience, co-pilots are becoming a very popular example here. How can co-pilots make recommendations, generate responses, automate a lot of the mundane tasks that humans just don’t like to do and frankly aren’t good at?

So I think there’s a clear distinction then between artificial intelligence, really those machines taking on the human capabilities 100%, versus augmented intelligence, not replacing humans but lifting them up, allowing them to do more. Where there’s overlap, and I think we’re going to see this trend really start accelerating in the years to come in customer experiences, is the blend between those two as we’re interacting with a brand. What I mean by that is maybe starting out by having a conversation with an intelligent virtual agent, a chatbot, and then seamlessly blending into a live human customer representative who plays a specialized role. So maybe as I’m researching a new product to buy, such as a cell phone online, I can ask the chatbot some questions, and it’s referring to its knowledge base and its past interactions to answer those. But when it’s time to ask a very specific question, I might be elevated to a customer service representative; that brand might choose to say, “Hey, when it’s time to buy, I want to ensure you’re speaking to a live individual.” So I think there’s going to be a blend or a continuum, if you will, of these types of interactions. And I think we’re going to get to a point very soon where we might not even know: is it a human on the other end of that digital interaction, or a machine chatting back and forth? But those two concepts, artificial intelligence and augmented intelligence, are certainly here to stay and are driving improvements in customer experience at scale with brands.

Laurel: Well, there’s the customer journey, but then there’s also the AI journey, and most of those journeys start with data. So internally, what is the process of bolstering AI capabilities in terms of data, and how does data play a role in enhancing both employee and customer experiences?

Andy: I think in today’s age, it’s common understanding really that AI is only as good as the data it’s trained on. Quick anecdote, if I’m an AI engineer and I’m trying to predict what movies people will watch, so I can drive engagement into my movie app, I’m going to want data. What movies have people watched in the past and what did they like? Similarly in customer experience, if I’m trying to predict the best outcome of that interaction, I want CX data. I want to know what’s gone well in the past on these interactions, what’s gone poorly or wrong? I don’t want data that’s just available on the public internet. I need specialized CX data for my AI models. When we think about bolstering AI capabilities, it’s really about getting the right data to train my models on so that they have those best outcomes.

And going back to the example I brought in around sentiment, I think that reinforces the need to ensure that when we’re training AI models for customer experience, it’s done off of rich CX datasets and not just publicly available information like some of the more popular large language models are using.

And when I think about how data plays a role in enhancing employee and customer experiences, there’s an important strategy: deriving new information, or new data, from the unstructured data sets that these contact centers and experience centers often have. When we think about a conversation, it’s very open-ended, right? It could go many ways. It is not often predictable, and it’s very hard to understand at the surface. Where AI and advanced machine learning techniques can help, though, is in deriving new information from those conversations, such as what the consumer’s sentiment level was at the beginning of the conversation versus the end. What actions did the agent take that drove either positive or negative trends in that sentiment? How did all of these elements play out? Very quickly you can go from large unstructured data sets that might not have a lot of signals in them to very large data sets that are rich and contain a lot of signals. Deriving that new information, or understanding what I like to think of as the chemistry of the conversation, is playing a very critical role in AI powering customer experiences today, ensuring that those experiences are trusted, they’re done right, and they’re built on consumer data that can be trusted, not public information that doesn’t really help drive a positive customer experience.
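
A simplified version of that “chemistry of the conversation” idea is sketched below: score sentiment on each customer turn, then compare the start and end of the call to see whether the agent turned it around. The word lists and transcript are illustrative stand-ins for trained sentiment models.

```python
# Sketch of deriving structured signals from an unstructured conversation:
# score sentiment per customer turn, then compare the start and end of the call.
# The word lists and transcript are illustrative stand-ins for trained models.

import re

POSITIVE = {"great", "thanks", "perfect", "resolved", "helpful"}
NEGATIVE = {"frustrated", "angry", "waiting", "cancel", "problem"}

def turn_sentiment(text):
    words = set(re.findall(r"[a-z']+", text.lower()))
    return len(words & POSITIVE) - len(words & NEGATIVE)

def conversation_signals(turns):
    scores = [turn_sentiment(t["text"]) for t in turns if t["speaker"] == "customer"]
    return {
        "start_sentiment": scores[0],
        "end_sentiment": scores[-1],
        "sentiment_lift": scores[-1] - scores[0],   # did the agent turn it around?
    }

turns = [
    {"speaker": "customer", "text": "I'm frustrated, I've been waiting on this problem for days"},
    {"speaker": "agent", "text": "Let me take ownership of this for you"},
    {"speaker": "customer", "text": "Great, thanks, that resolved it, very helpful"},
]
print(conversation_signals(turns))  # {'start_sentiment': -3, 'end_sentiment': 4, 'sentiment_lift': 7}
```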

Laurel: Getting back to your idea that customer experience is the business. One of the major questions that most organizations face with technology deployment is how to deliver quality customer experiences without compromising the bottom line. So how can AI move the needle into that positive territory?

Andy: Yeah, I think if there’s one word to think about when it comes to AI moving the bottom line, it’s scale. I think how we think of things is really all about scale, allowing humans or employees to do more, whether that’s by increasing their cognitive load, saving them time, allowing things to be more efficient. Again, that’s referring back to that augmented intelligence. And then when we go through artificial intelligence thinking all about automation. So how can we offer customer experience 365 24/7? How can allowing consumers to reach out to a brand at any time that’s convenient boost that customer experience? So doing both of those tactics in a way that moves the bottom line and drives results is important. I think there’s a third one though that isn’t receiving enough attention, and that’s consistency. So we can allow employees to do more. We can automate their tasks to provide more capacity, but we also have to provide consistent, positive experiences.

And where AI and machine learning really help here is finding areas of variability, finding not only the areas of variability but then also the root cause or the driver of those variabilities to close those gaps. And a brand I’ll give a shout out to who I think does this incredibly well is Starbucks. I can go to a Starbucks in any location around the world and order an iced caramel macchiato, and I’m going to get that same drink experience regardless of the thousands of Starbucks locations. And I think that consistency plays a really powerful role in the overall customer experience of Starbucks’ brand. And when you think about the logistics of doing that at scale, it’s incredibly complex and challenging. If you have the data and you have the right tools and the AI, finding those gaps and offering more consistent experiences is incredibly powerful.

Laurel: So could you share some practical strategies and best practices for organizations to leverage AI to empower employees, foster positive and productive work environments, and then also all of this would ultimately improve customer interactions?

Andy: Yeah, I think the overall positive, going back to earlier in our conversation, is that there are many use cases. AI has a tremendous opportunity in this space. The recommendation I would provide is to focus first on a crystal clear, high-probability use case for your business. Auto-summary, or the automated note-taking of agents’ after-call work, is becoming an increasingly popular one that we’re seeing in the space. And I think the reasons for it are really clear. It’s a win-win-win for the employee, the customer, and the business. It’s a win for the employee because AI is going to automate something that is mundane for them, or very procedural. If you think of a customer service representative, they’re taking 40, 50, maybe upwards of 60 conversations a day, taking notes of what was talked about. What are the action items? Very complicated, mundane, tiresome even. They don’t like doing it.

So AI can offload that activity from them, which is a win for the employee. It’s a win for the customer because a lot of times the agents aren’t great at note-taking, especially when they’re doing it so often, which can lead to that unfortunate experience where you have to call back as a consumer and repeat yourself because the agent you’re now talking to can’t understand or doesn’t have good information about what you called or interacted with previously. So from a consumer experience, it helps them because they have to repeat themselves less often. The agent they’re currently speaking with can offer a more personalized service because they have better notes or history of past interactions.

And then finally, the third win, it’s really good for the business because you’re saving time and money that the agents no longer have to manually do something. We see that 30 to 60 seconds of note-taking at a business with 1,000 employees adds up to be millions of dollars every year. So there’s a clear-cut business case for the business to achieve results, improve customer experience, and improve employee experience at the same time. I think as you’re hopefully venturing into leveraging AI more to improve your business, the key recommendation I would provide is just to focus on those crystal clear high-probability use cases and get those early wins and then reinvest back into the business.
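
A quick back-of-the-envelope check of that note-taking arithmetic, using assumed inputs (1,000 agents, about 50 calls per agent per day, 45 seconds saved per call, and a $30-per-hour loaded labor cost), lands in the millions of dollars per year Traba mentions. None of these inputs are NICE’s exact figures.

```python
# Quick sanity check of the note-taking arithmetic above, with assumed inputs:
# 1,000 agents, ~50 calls per agent per day, 45 seconds saved per call, and a
# fully loaded labor cost of $30/hour. None of these are NICE's exact figures.

agents = 1_000
calls_per_agent_per_day = 50
seconds_saved_per_call = 45
working_days_per_year = 250
loaded_cost_per_hour = 30.0

hours_saved = (agents * calls_per_agent_per_day * seconds_saved_per_call
               * working_days_per_year / 3600)
print(f"Hours saved per year: {hours_saved:,.0f}")                       # 156,250
print(f"Estimated value: ${hours_saved * loaded_cost_per_hour:,.0f}")    # ~$4.7 million
```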

Laurel: Yeah, I think those are the positive aspects of that, but concerns about job loss due to automation tend to crop up with AI deployment. So what are the opportunities that AI integration can provide for organizations and their employees so it’s a win-win for everybody?

Andy: I’m certainly empathetic to this topic. As with all new technologies, whenever there’s excitement around them, there’s also uncertainty about what the long-term outcomes will be. But when we look back historically, all transformative technologies have boosted GDP and created more jobs. So I see no reason to believe this time around will be different. Now, those jobs might be different, and new roles will emerge. When it comes to customer experience and the employee experience, one interesting theory I’m following is, if you think about Apple, they had a really revolutionary model where they branded their employees geniuses. So you’d go into an Apple store and you would speak to a genius, and that model carried through all of their physical flagship stores. A very positive model. Back in the day, people would actually pay money to speak to a genius or get a priority customer service slot, but it’s a model that’s really hard to scale and one that hasn’t been successful in a virtual environment.

I think when we see AI and a lot of these new technology advancements though, that’s a prime example of maybe a new job that does emerge where if AI is offloading a lot of the interactions to chatbots, what do customer service agents do? Maybe they become geniuses where they’re playing a more proactive, high-value add back to consumers and overall improving the service and the experience there. So I do think that AI will have job shifts, but overall there’ll be a net positive just like there has been with all past transformative technologies.

Laurel: Continuing that look ahead, how do you see the era of AI evolving in terms of customer and employee experience? What excites you about the future in this space?

Andy: This is actually what I’m most excited about. When we think about customer experience today, it’s highly reactive. As a consumer, if I have a problem, I search your website, I interact with your chatbot, I end up talking to a live customer service representative. The consumer is the driving force of everything, and the business or the brand has to be reactive to them. Where I see the future evolving in terms of customer experiences is being much more proactive, with the convergence of data, these advancements in technology, and certainly generative AI. I do see AI becoming smarter, and being more predictive and proactive, to alert that there is going to be a problem before the consumer actually experiences it and to take action proactively before that problem manifests itself.

Just a quick example: maybe there’s a media or a cable company where a device is reaching its end-of-life state. Rather than have it go on the fritz the day of the Super Bowl, reach out, be proactive, contact that individual, and give them specific instructions to follow. And I think that’s really where we see the advancements of not only big data and AI, but just the abundance of the ability to reach out on preferred channels, whether that’s a simple SMS or a high-touch service representative reaching out. That’s really where the future of customer experience moves: to a much more proactive state from its reactive state today.
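
In its simplest form, that proactive pattern is just a rule over product and preference data: flag devices nearing end of life and queue outreach on each customer’s preferred channel before a failure triggers an inbound call. The device records and channels below are invented for the sketch; a real deployment would use predictive models and the brand’s actual outreach systems.

```python
# Minimal sketch of the proactive pattern described above: flag devices nearing
# end of life and queue outreach on each customer's preferred channel before a
# failure ever triggers an inbound call. Device data and channels are invented.

from datetime import date, timedelta

devices = [
    {"customer": "A-102", "end_of_life": date.today() + timedelta(days=20),
     "preferred_channel": "sms"},
    {"customer": "B-417", "end_of_life": date.today() + timedelta(days=200),
     "preferred_channel": "email"},
]

def proactive_outreach(devices, horizon_days=30):
    cutoff = date.today() + timedelta(days=horizon_days)
    return [
        {"customer": d["customer"], "channel": d["preferred_channel"],
         "message": "Your device is due for replacement; here is what to do next."}
        for d in devices if d["end_of_life"] <= cutoff
    ]

for task in proactive_outreach(devices):
    print(task)   # only customer A-102 gets contacted, via SMS
```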

Laurel: Well, thank you so much, Andy. I appreciate your time, and thank you for joining us on the Business Lab today.

Andy: Thanks. This was an excellent conversation, Laurel, and thanks again for having me.

Laurel: That was Andy Traba, vice president of product marketing at NICE, whom I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the global director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Building a more reliable supply chain

In 2021, when a massive container ship became wedged in the Suez Canal, you could almost hear the collective sigh of frustration around the globe. It was a here-we-go-again moment in a year full of supply chain hiccups. Every minute the ship remained stuck represented about $6.7 million in paralyzed global trade.

The 12 months leading up to the debacle had seen countless manufacturing, production, and shipping snags, thanks to the covid-19 pandemic. The upheaval illuminated the critical role of supply chains in consumers’ everyday lives—nothing, from baby formula to fresh produce to ergonomic office chairs, seemed safe.

For companies producing just about any physical product, the many “black swan” events (catastrophic incidents that are nearly impossible to predict) of the last four years illustrate the importance of supply chain resilience—businesses’ ability to anticipate, respond, and bounce back. Yet many organizations still don’t have robust measures in place for future setbacks.

In a poll of 250 business leaders conducted by MIT Technology Review Insights in partnership with Infosys Cobalt, just 12% say their supply chains are in a “fully modern, integrated” state. Almost half of respondents’ firms (47%) regularly experience some supply chain disruptions—nearly one in five (19%) say they feel “constant pressure,” and 28% experience “occasional disruptions.” A mere 6% say disruptions aren’t an issue. But there’s hope on the horizon. In 2024, rapidly advancing technologies are making transparent, collaborative, and data-driven supply chains more realistic.

“Emerging technologies can play a vital role in creating more sustainable and circular supply chains,” says Dinesh Rao, executive vice president and co-head of delivery at digital services and consulting company Infosys. “Recent strides in artificial intelligence and machine learning, blockchain, and other systems will help build the ability to deliver future-ready, resilient supply chains.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Decarbonizing production of energy is a quick win 

Debate around the pace and nature of decarbonization continues to dominate the global news agenda, from the European Scientific Advisory Board on Climate Change warning that the EU must double annual emissions cuts, to forecasts that it could cost more than $1 trillion to decarbonize the global shipping industry. Despite differing opinions on the right path to net zero, all agree that every sector needs to reduce emissions to avoid the worst effects of climate change.

Oil and gas production accounts for 15% of the world’s emissions, according to the International Energy Agency. Some of the largest global companies have embarked on bold plans to cut to zero by 2050 the carbon and methane associated with their production. One player with an ambition to get there five years ahead of the rest is the UAE’s ADNOC, having announced in January 2024 it will lift spending on decarbonization projects to $23 billion from $15 billion.  

In an exclusive interview, Musabbeh Al Kaabi, ADNOC’s Executive Director for Low Carbon Solutions and International Growth, says he is hopeful the industry can make a meaningful contribution while supplying the secure and affordable energy needed to meet growing global demand.

Q: Mr. Al Kaabi, how do you plan to spend the extra $8 billion ADNOC has allocated to decarbonization?

Mr. Mussabeh Al Kaabi: Much of our investment focus is on the technologies and systems that will deliver tangible action in eliminating the emissions from our energy production. At 7 kilograms of CO2 per barrel of oil equivalent, the energy we provide is among the least carbon-intensive in our industry, yet we continue to explore every opportunity for further reductions. For example, we are using clean grid power—from renewable and nuclear sources—to meet the needs of our onshore operations. Meanwhile, we are investing almost $4 billion to electrify our offshore production in order to cut our carbon footprint from those operations by up to 50%.

We also see great potential in carbon capture utilization and sequestration (CCUS), especially where emissions are hard to abate. Last year, we doubled our capacity target to 10 million tonnes per annum by 2030. We currently have close to 4 million tonnes in capacity in development or operation and are working with key players in our industry to create a world-leading carbon management platform.

Additionally, we’re developing nature-based solutions to support our target for net zero by 2045. One of our initiatives is to plant 10 million mangroves, which serve as powerful carbon sinks, along our coastline by 2030. We used drone technology to plant 2.5 million mangrove seeds in 2023.

Q: What about renewables?

Mr. Mussabeh Al Kaabi: It’s in everyone’s interests that we invest in the growth of renewables and low-carbon fuels like hydrogen. Through our shareholding in Masdar and Masdar Green Hydrogen, we are tripling our renewable capacity by supporting a growth target of 100 gigawatts by 2030.

Q: We have been talking about hydrogen and carbon capture and storage (CCS) as the energies and solutions of tomorrow for decades. Why haven’t they broken through yet?

Mr. Mussabeh Al Kaabi: Hydrogen and CCS offer great promise, but, like any other transformative technology, they require R&D attention, investment, and scale-up opportunities.

Hydrogen is an abundant and portable fuel that could help reduce emissions from many sectors, including transport and power. Meanwhile, CCS could abate emissions from heavy, energy-intensive industries like steel and cement.

These technologies are proven, and we expect more improvements to allow wider consumer use. We will continue to develop and invest in them, while continuing to responsibly provide our traditional portfolio of low-carbon energy products that the world needs.

Q: Is there any evidence the costs can come down?

Mr. Mussabeh Al Kaabi: Yes, absolutely. The dramatic fall in the price of solar over recent years—an 89% reduction from 2010 to 2022 according to the International Renewable Energy Agency—just goes to show that clean technologies can become viable, mainstream sources of energy if the right policy and investment mechanisms are in place.

Q: Do you favor a particular decarbonization technology?

Mr. Mussabeh Al Kaabi: We don’t have the luxury of picking winners and losers. The scale of the challenge is too great. World economies consume the equivalent of around 250 million barrels of oil, gas, and coal every single day. We are going to need to invest in every viable clean energy and decarbonization technology. If CCS can do it, let’s do it. If renewables can do it, let’s invest in it.

That said, I am especially optimistic about the role artificial intelligence will play in our decarbonization drive. We’ve been implementing AI and machine learning tools across our value chain for many years; they’ve helped us eliminate around a million tonnes of CO2 emissions over the past two years. As AI technology grows at an exponential rate, we will continue to invest in the latest innovations to ensure we provide maximum energy with minimum emissions.

Q: Can traditional energy companies be part of the solution?

Mr. Mussabeh Al Kaabi: They can and they must be part of the solution. Energy companies have the technical capabilities, the project management experience and, crucially, the financial strength to advance solutions. For example, we’re investing in one of the largest integrated carbon capture projects in the Middle East and North Africa, at our gas processing facility in Habshan. Once complete, it will add 1.5 million tonnes of CCUS capacity. We’ve also just announced an investment into Storegga, the lead developer of the UK’s Acorn CCS project in Scotland, marking our first overseas investment of its kind.

Q: What’s your approach to decarbonization investment?

Mr. Mussabeh Al Kaabi: Our approach is to partner with successful developers of economic technologies and to incubate promising climate solutions so ADNOC and other players can use them to accelerate the path to net zero. There are numerous examples.

Last year, we launched the ADNOC Decarbonization Technology Challenge, a global competition that attracted 650 climate tech startups vying for a million-dollar piloting opportunity with us. The winner was Revterra, a Houston-based startup that will pilot its kinetic battery technology with us over the coming months.  

We’re also working to deploy another cutting-edge battery technology that involves taking used electric vehicle batteries and upcycling them into a battery energy storage system, which we’ll use to help decarbonize our remote production activity by up to 25%.

In the northern regions of the UAE, we’re working closely with another startup company to pilot carbon dioxide mineralization technology. It is a project we are all excited about because it presents opportunities for CO2 removal at a significant scale.

Additionally, we are working with leading industry service providers to explore new ways of producing graphene and low-carbon hydrogen.

Q: Finally, how confident are you that transformation will happen?

Mr. Mussabeh Al Kaabi: I am confident. It can be done. Transformation is happening. It won’t happen overnight, and it needs to be just and equitable for the poorest among us, but I am optimistic. We must focus on taking tangible action and not underestimate the power of human innovation. History has shown that, when we come together, we can innovate and act. I am positive that, over time, we will continue to see progress towards our common goal.

This content was produced by ADNOC. It was not written by MIT Technology Review’s editorial staff.


Building a data-driven health-care ecosystem

The application of AI to health-care data holds promise for aligning the U.S. health-care system with quality care and positive health outcomes. But AI for health care hasn’t reached its full capacity. One reason is the inconsistent quality and integrity of the data that AI depends on. The industry—hospitals, providers, insurers, and administrators—uses diverse systems. The resulting data can be difficult to share because of incompatibility, privacy regulations, and the unstructured nature of much of the data. The data can carry errors, omissions, and duplications, making it difficult to access, analyze, and use. Even the best data can carry bias: the data used to train AI models can reinforce underrepresentation of historically marginalized populations. The growth of AI in all industries means data quality is increasingly vital.

While AI-driven innovation is still growing, the U.S. continues to spend more than twice as much as the average high-income country on its health care, while its health outcomes are falling: the latest data from the U.S. Centers for Disease Control and Prevention’s National Center for Health Statistics indicates U.S. life expectancy dropped for the second year in a row in 2021.

To spark innovation by identifying gaps and pain points in the employer-based health-care system, JPMorgan Chase launched Morgan Health in 2021. Morgan Health’s chief technology officer of corporate responsibility, Tiffany West Polk, says Morgan Health is driven to improve health outcomes, affordability, and equity, with data at its foundation. Gaining insights from large data streams means optimizing analytical platforms and ensuring data remains secure while also staying HIPAA and Health Resources and Services Administration (HRSA) compliant, she says.

Currently, Polk says, the U.S. health-care system seems to be “quite stuck” in terms of keeping health-care quality and positive outcomes in line with rising costs.

  • “If you look across the broader U.S. environment in particular, employer sponsored insurance is a huge part of the health-care net for the United States, and employers make significant financial investment to provide health benefits to their employees. It’s one of the main things that people look at when they’re looking across an employer landscape and thinking about who they want to work for.”

Investing in new ways to provide health care

Nearly 160 million people in the U.S. have employer-sponsored health insurance as of 2022, according to health-care policy research non-profit KFF (formerly the Kaiser Family Foundation). JPMorgan Chase launched Morgan Health because of its focus on improving employer-sponsored health care, not least for its 165,000 employees.

Morgan Health has invested $130 million in capital during the past 18-plus months in five innovative health-care companies: advanced primary care provider Vera Whole Health; health-care data analytics specialist Embold Health; Kindbody, a fertility clinic network and global family-building benefits provider; LetsGetChecked, which creates home-monitoring clinical tools; and Centivo, which provides health care plans for self-insured employers.

All of these companies offer new approaches to conventional employer-sponsored health care with the aim of delivering a higher standard of care. Morgan Health’s collaboration with these enterprises will examine how their approaches change patient outcomes, health-care equity, and affordability, and how their successes can be scaled.

“Many Americans today face real barriers to receiving high-quality, affordable, and equitable health care, even with employer-sponsored insurance,” Polk says. This calls for breaking the paradigm of delivery-incentivized health care, she says, which rewards providers for delivering services but pays insufficient attention to outcomes.

  • “We have a model today where our health-care providers are incentivized based on the number of patients they see or the number of services they perform. What that means is that they’re not incentivized based on improvements in patients’ health and well-being. And so when you have a model that thinks volume versus value, those challenges then serve to compound the disparities that we have. And that then also means that those who have employer-sponsored insurance are also similarly challenged.”

For Morgan Health, AI and machine learning (ML) will be key to problem-solving with health-care technology, Polk says. AI is ubiquitous across industries and is the go-to when we think about innovation, she says, but the hype can mean we forget about the importance of data accessibility and quality.

Polk says solving this data challenge makes this an exciting and transformational time to be a chief technology officer and a technologist. The next stage of evolution in health care can’t proceed without better data, Polk says, and this is what the data and analytics team at Morgan Health is addressing.

  • “[AI] has become so ubiquitous in terms of how we think about everything. And we think that it is the thing that’s going to fix anything and everything in technology. And it has become so ubiquitous and so the go-to when you think about innovation, that I think that sometimes, there’s this way in which people kind of forget about what AI actually is underneath the covers.”

Garnering data-based insights

To strengthen health-care data, the industry is moving increasingly toward standardized electronic health records (EHRs) for patients. A 2023 Deloitte study says use of EHRs and health information exchanges (HIEs) is growing rapidly, with organizations building data lakes and using AI to combine and cleanse data. These measures provide a “strong digital backbone” for building connections between hospitals, primary care centers, and payment tools, the study says, which should help reduce errors, unnecessary readmissions, and duplicate testing.
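As a rough illustration of the “combine and cleanse” step described above, the sketch below merges hypothetical records from two sources into a single schema and drops duplicate test results. The source names, fields, and values are assumptions for illustration, not any real EHR or HIE schema.

```python
# Hypothetical example: records from a hospital system and a primary-care
# clinic are normalized into one table in a data lake and de-duplicated.
import pandas as pd

hospital = pd.DataFrame({
    "mrn": ["A1", "A2"],
    "dob": ["1980-02-14", "1975-07-01"],
    "last_hba1c": [6.1, 7.4],
})
clinic = pd.DataFrame({
    "patient_ref": ["A2", "A3"],
    "birth_date": ["1975-07-01", "1990-11-30"],
    "last_hba1c": [7.4, 5.6],
})

# Normalize column names so both sources share one schema.
clinic = clinic.rename(columns={"patient_ref": "mrn", "birth_date": "dob"})

combined = pd.concat([hospital, clinic], ignore_index=True)
combined["dob"] = pd.to_datetime(combined["dob"])

# Drop the duplicate result reported by both providers for the same patient.
cleansed = combined.drop_duplicates(subset=["mrn", "last_hba1c"])
print(cleansed)
```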

The U.S. Department of Health and Human Services (HHS) is also building a network for digital connection in the health-care industry, allowing data to flow among multiple providers and geographies. Its Office of the National Coordinator for Health Information Technology (ONC) announced in December 2023 that its national health data exchange, the Trusted Exchange Framework and Common Agreement (TEFCA), is operational. The exchange connects Qualified Health Information Networks (QHINs), which it certifies and onboards, under standard policies and technical requirements.

Polk says Morgan Health is improving foundations to incentivize better outcomes for patients. Morgan Health’s work can create standards—grounded in data—that incentivize better performance, which can then be shared across the employer-sponsored insurance network, and among broader communities. Using AI features such as metadata tagging (algorithms that can group and label data that has a common purpose), she says, “is one way health-care companies can simplify tasks and open up more time for providing care.”

  • “If you do your data ingestion right, if you cleanse your data right, if you make sure that your metadata tagging is correct, and then you are very aware of the way in which your algorithms have been biased in the past, you can be aware of that so that you can make sure that your algorithms are inclusive moving forward.”
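A minimal, rule-based sketch of the kind of metadata tagging Polk describes might look like the following. The field names and tag categories are hypothetical and do not reflect Morgan Health’s actual pipeline or taxonomy.

```python
# Hypothetical rule-based tagging: incoming fields are grouped and labeled
# by a common purpose so downstream users can find and govern them.
FIELD_TAGS = {
    "claim": ["cpt_code", "billed_amount", "payer_id"],
    "clinical": ["diagnosis_code", "last_hba1c", "blood_pressure"],
    "demographic": ["dob", "zip_code", "preferred_language"],
}

def tag_field(field_name: str) -> str:
    """Return the purpose tag for a field, or 'untagged' if no rule matches."""
    for tag, fields in FIELD_TAGS.items():
        if field_name in fields:
            return tag
    return "untagged"

incoming = ["cpt_code", "dob", "last_hba1c", "free_text_note"]
catalog = {field: tag_field(field) for field in incoming}
print(catalog)
# {'cpt_code': 'claim', 'dob': 'demographic',
#  'last_hba1c': 'clinical', 'free_text_note': 'untagged'}
```

In practice, learned classifiers often replace hand-written rules like these, but the goal is the same: labeled, discoverable data that frees up time for providing care.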

“I think the most important thing is incentivizing our health-care partners who provide for our employees to meaningfully improve health-care quality, equity, and affordability through incentivizing outcomes, not incentivizing volume, not incentivizing visits, but really incentivizing outcomes,” Polk says.

This article is for informational purposes only and it is not intended as legal, tax, financial, investment, accounting or regulatory advice. Opinions expressed herein are the personal views of the individual(s) and do not represent the views of JPMorgan Chase & Co. The accuracy of any statements, linked resources, reported findings or quotations are not the responsibility of JPMorgan Chase & Co.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
