New technologies typically raise a host of concerns, and generative artificial intelligence (AI) is no exception. 

In addition to reservations about the nascent technology’s fallibility, some worry about how it could be used, abused and misused. Companies are setting guidelines for its use, and the government is working with industry to devise regulations. As acceptance and use of generative AI advances, one of the biggest questions users have is, “What’s this going to do to my job?”

AI hallucinations

ChatGPT has a penchant for returning incorrect information, responses often referred to as hallucinations.

Such mistakes are particularly problematic because people tend to trust computers, said Bill Braun, CIO at Chevron.

“People know that humans can be wrong. People have grown to expect that computers are always right, and now we're seeing the computer might not be right,” he said. 

Moe Tanabian, chief product officer at Cognite, said generative AI responses often “feel and look real” but are not always accurate. 

“When they don't know the answer, they just make things up,” he said.

Manoj Saxena, founder of the Responsible AI Institute (RAI Institute), said the large language models driving generative AI are massive pattern detection machines.

“When you type into a Google search, it starts giving you the next predictive word. Imagine doing that predictive word across all of humanity's knowledge,” he said. “It is confidently telling you what it should look like, the answer, not what the answer is, and that's what hallucination is.”
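
Saxena’s description can be made concrete in a few lines of code. The sketch below uses the small, openly available GPT-2 model via Hugging Face’s transformers library as a stand-in for far larger commercial models; it shows the model ranking candidate next words purely by how well they fit the pattern of the prompt, with no notion of whether the continuation is true.

```python
# A minimal sketch of next-word prediction using the open GPT-2 model.
# GPT-2 is tiny compared with commercial LLMs, but the principle is the
# same: score every possible next token by how well it fits the pattern.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The best way to complete a multistage frac is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocabulary)

# Probability distribution over the single next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}  p={prob.item():.3f}")
```

The model picks whichever continuation looks most plausible; the gap between “what the answer should look like” and “what the answer is” is exactly the hallucination Saxena describes.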

Beyond the issue of whether to trust a response from generative AI is the issue of legal uncertainty.

Sriram Srinivasan, senior vice president for Halliburton Global Technology, said the service company is looking into the possibility of using GitHub Copilot, a tool that autocompletes code, in a project to rewrite legacy code in modern languages. 

AI-assisted programming can help from a productivity perspective, but it may be problematic from a legal perspective.

“One of the things we are being mindful about is issues around patentability and copyrightability. The question of AI-generated content being patentable is unsettled in the U.S. for sure — probably everywhere — so we have to be very careful in deciding where and how we want to use AI-generated code snippets,” he said.
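
The article does not describe how Halliburton tracks where AI-generated snippets land, but one simple precaution teams can take is to mark such snippets in source comments so they can be located later for legal review. The sketch below is hypothetical; the AI-GENERATED marker convention and the script are invented for illustration.

```python
# Hypothetical sketch: scan a repository for code regions tagged as
# AI-generated so they can be surfaced for patent/copyright review.
# The "AI-GENERATED" marker is an invented convention, not a standard.
import pathlib

MARKER = "AI-GENERATED"

def find_tagged_code(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, line text) for every tagged line."""
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if MARKER in line:
                hits.append((str(path), lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for file, lineno, text in find_tagged_code("."):
        print(f"{file}:{lineno}: {text}")
```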

Generative AI also raises other potential legal liabilities.

James Brady, chief digital officer for Baker Hughes oilfield services and equipment, said current generative AI tools simply provide responses without detailing source material.

“Say that we turn ChatGPT loose on the SPE [Society of Petroleum Engineers] papers database, and then we ask it, ‘What’s the best way to do a multistage frac in the Bakken?’” he said. 

The resulting recommendation may well violate another company’s patent, he said.

“It might not come out and say, ‘Watch out, this is patented.’ It might just, from the assembled data, say, ‘I suggest that you do it in this way.’” 

Mehdi Miremadi, a senior partner at McKinsey & Co., said the use of generative AI raises concerns about cybersecurity, privacy and data security.

Inaccuracy and security were top concerns related to using generative AI, according to respondents in McKinsey’s “The state of AI in 2023: Generative AI’s breakout year” report, released in early August. (Source: McKinsey & Co.)

Data “are incredibly sensitive assets. If you are opening them to these models, you're also opening significantly broadened potential access to this data,” he said. “How do you ensure that this stays in the right hands?”
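
Miremadi does not prescribe specific controls, but one common safeguard is to scrub obviously sensitive values from prompts before they leave the company’s hands. The sketch below is a minimal, hypothetical illustration; the patterns are placeholders, and real deployments rely on dedicated data-loss-prevention tooling with far broader coverage.

```python
# Hypothetical sketch: redact obviously sensitive values from a prompt
# before sending it to an external generative AI service. The patterns
# below are illustrative only, not a complete redaction policy.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known-sensitive pattern with a label."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Email jane.doe@example.com the results; auth with sk-abc123def456ghi789"))
# -> Email [EMAIL REDACTED] the results; auth with [API_KEY REDACTED]
```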

In fact, in McKinsey’s “The state of AI in 2023: Generative AI’s breakout year” report, released in early August, respondents were almost as concerned about cybersecurity with generative AI, at 53%, as they were about inaccuracy, at 56%. 

Generative AI complicates the framework around using AI responsibly. (Source: Responsible AI Institute)

Because of these concerns, some companies are instituting guidelines for how generative AI offerings like ChatGPT can be used, while others are prohibiting the technology altogether. 

Saxena said when employees use ChatGPT, they are exposing their companies to risk because data they put into the program “goes right into OpenAI and Microsoft’s hands.” The RAI Institute provides diagnostics of how employees are using ChatGPT and offers guidelines and playbooks on using generative AI responsibly, he said.

Braun said Chevron is working to help its workforce understand how to approach and use new technologies like generative AI. 

“We put in a speed bump before you go to that site that describes our expectations for safe use, and then, click that you acknowledge,” he said. “We tried to describe in five or six bullet points what are the key things in terms of how to use it to make sure you're using it the right way.” 
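
Chevron has not published how its speed bump works, but what Braun describes is a standard web interstitial: show the expectations once, record the acknowledgment, then pass the user through. A hypothetical sketch using Flask, with every name and URL invented for illustration:

```python
# Hypothetical sketch of an acknowledgment "speed bump" in Flask. Users
# see the safe-use expectations once, click to acknowledge, and are then
# redirected to the AI tool. Not Chevron's actual implementation.
from flask import Flask, make_response, redirect, request

app = Flask(__name__)
AI_TOOL_URL = "https://chat.example.com"  # placeholder destination

EXPECTATIONS_PAGE = """<h1>Safe-use expectations</h1>
<ul>
  <li>Do not paste confidential or proprietary data.</li>
  <li>Verify all generated output before relying on it.</li>
</ul>
<form method="post" action="/acknowledge"><button>I acknowledge</button></form>"""

@app.route("/ai")
def speed_bump():
    # Pass straight through once the user has acknowledged.
    if request.cookies.get("ai_ack") == "yes":
        return redirect(AI_TOOL_URL)
    return EXPECTATIONS_PAGE

@app.route("/acknowledge", methods=["POST"])
def acknowledge():
    resp = make_response(redirect(AI_TOOL_URL))
    resp.set_cookie("ai_ack", "yes")
    return resp
```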

He offered a parallel to a commonly used technology: employees have access to email, which raises cybersecurity concerns about them clicking links they shouldn’t.

“We have to help the workforce understand how the threats of the technology continue to evolve,” he said.

Chevron joined the RAI Institute and is learning from other members because it didn’t want to try to navigate the world of AI on its own.

“The group comes together and can share practices, set expectations and shape what those guidelines look like. We think that is the best way to do that,” Braun said. 

Baker Hughes temporarily blocked ChatGPT from its network, Brady said. The risk with ChatGPT, he said, is that it’s open and not secure.

“When you put something out there, you’re effectively taking it outside the network,” he said, and Baker Hughes wanted to ensure people weren’t unwittingly putting confidential information onto the internet. “We’ve done some raise-awareness stuff to communicate why we blocked it and are taking steps to reintroduce it for specific initiatives in controlled ways.”

The ‘Chernobyl of AI’

One of the things that makes generative AI tricky is that the new technology is developing rapidly with few mechanisms in place to regulate it. 

“Regulations have not caught up to this, and there is a real potential of a lot of damage being done over the next few years,” Saxena said. “I call it the Chernobyl of AI, where things are going to blow up and create a lot of damage.” 

He said the industry needs to think about safety first. 

“It's like everyone's focused on building the nuclear reactor and no one's thinking of putting the safety dome on top of it,” he said. “I think now the time has come where people are saying, ‘Hey, this is not just nice to have. This is something we must have.’”

Vasi Philomin, vice president and general manager for Generative AI at Amazon Web Services (AWS), said the company is working with policymakers and standards bodies to help shape regulations, standards and recommended best practices for AI and generative AI.

“We work with a lot of standards bodies to also help with the regulation to shape it, because I think it's important also for the regulators to work with industry to understand what is possible and what is not,” he said.

Cognite’s Tanabian said there is a push for regulations requiring “watermarking,” or labeling AI-generated content as such.

“Generative AI models sometimes feel so real that it opens a lot of doors for malicious actors to abuse the system,” he said. 

For example, with just a snippet of a person’s voice, a generative AI model can produce audio indistinguishable from the real thing, and anyone who knows that person would likely believe the message, he said.

“There is now a huge vacuum for governments to step in and create some set of regulations to authenticate and determine the authenticity of these contents. Is it coming from a generative AI model that is basically being used by a malicious actor, or is it actually real?” he said. 
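
The watermarking proposals Tanabian refers to vary widely; some embed statistical signals in the generated tokens themselves, while others attach signed provenance metadata. The basic attach-and-verify flow can be sketched simply, though. The example below uses an HMAC signature as a stand-in for a real provenance scheme and is illustrative only.

```python
# Hypothetical sketch of content provenance: a generator signs its output
# so a verifier can later check whether content came from that system.
# Real watermarking schemes are far more sophisticated than this.
import hashlib
import hmac

SECRET_KEY = b"generator-signing-key"  # placeholder; would be securely managed

def sign(content: str) -> str:
    """Produce a tag tying the content to this generator."""
    return hmac.new(SECRET_KEY, content.encode(), hashlib.sha256).hexdigest()

def verify(content: str, tag: str) -> bool:
    """Check whether this generator signed exactly this content."""
    return hmac.compare_digest(sign(content), tag)

message = "This transcript was produced by our generative model."
tag = sign(message)
print(verify(message, tag))                # True: authentic and unmodified
print(verify(message + " (edited)", tag))  # False: altered or unsigned
```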

People and companies

Despite the concerns that generative AI raises, it brings opportunity to businesses and employees.

SparkCognition CTO Sridhar Sudarsan said the sooner one embraces the generative AI journey, the better. He also suggested that companies that think about using generative AI holistically, rather than piecemeal or incrementally, will achieve greater results from it. 

“The more holistically you get on, the better it will be,” he said. 

Miremadi said that to achieve success using AI and generative AI, companies need “translators” who deeply understand not only their own craft and industry but also the data science, algorithms and models.

“They're extremely critical because if you just go and bring a number of Ph.D.s who just know how to run these algorithms, you have a real gap in how you link them with the rest of the organization and how you actually integrate their skillset in the organization,” he said.

Companies that are using more AI expect to reskill more of their workforce soon compared with companies not using AI as widely, according to respondents in McKinsey’s “The state of AI in 2023: Generative AI’s breakout year” report, released in early August. (Source: McKinsey & Co.)

He said data science training is already trending up, and more data scientists will be hired to help with AI and generative AI projects.

On the flip side, he said, there is a general hypothesis that the occupations most at risk of being wiped out by AI and generative AI technologies are those composed of many repetitive tasks. He does not believe AI applications will significantly harm the oil industry’s workforce but anticipates that some work will be delegated to the machine while new tasks and responsibilities fall to humans.

“I would think there will be, first of all, significant collaboration between humans and AI,” Miremadi said. “Second, there will be a difference in the types of tasks that folks do on a day-to-day basis.”

RAI Institute’s Saxena said workers who are concerned about their jobs should consider upskilling themselves to remain relevant and to thrive in the changing environment. First, he said, workers should educate themselves on what generative AI is and is not, what is possible with it and what risks accompany the use of the technology. 

Second, he said, they should take on a small project. The institute offers a testing area where people can try projects in a low-risk setting. In that way, they can start activating their skills to understand the basic concepts and start using generative AI, not as competition, but as a collaborator, he said.
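
A “small project” of the kind Saxena recommends can be only a few lines. The sketch below asks a generative model to summarize a document, with the human verifying the result; it assumes the OpenAI Python SDK (v1 or later), an API key in the OPENAI_API_KEY environment variable, and placeholder model and file names.

```python
# Hypothetical starter project: summarize a document with a generative
# model as collaborator. Assumes the OpenAI Python SDK v1+ and an API key
# in the OPENAI_API_KEY environment variable; names are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize the user's text in three bullet points."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

with open("field_report.txt") as f:  # placeholder file name
    print(summarize(f.read()))
```

Verifying the output before acting on it keeps the human in the loop, treating the model as a collaborator rather than a replacement.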

“A lot of people just look at this as, ‘Oh, it's going to take away my job.’ So it's man versus a machine. It's not just that,” Saxena said. “While there will be many tasks and some jobs that will indeed be automated away, the greatest potential here is in using AI as a co-creator. It's man and machine.” 


Editor’s note: This is the fourth part of a multi-part series examining the use of artificial intelligence in the oil patch. Read parts one, two and three here.