
AI Governance Boards Explained: How Smart Companies Approve Tools Like Copilot, Rovo, and Duo

Written by Mike Solinap
Published on March 16, 2026

Introduction

Hello and welcome to this SPK and Associates vlog. My name is Michael Roberts. I’m Vice President of Sales and Marketing here at SPK and Associates.

Artificial intelligence is quickly becoming embedded in engineering tools, and increasingly in tools for IT and other departments as well. It is becoming part of everyday work, from development platforms like GitLab to collaboration environments like Atlassian and Microsoft.

But as organizations rush to adopt AI capabilities like Copilot, Rovo, or GitLab Duo, many are discovering that governance, security, and data protection need to keep pace with innovation. Without the right oversight, these tools can increase risk around intellectual property, compliance, and unintended data exposure.

So in today’s vlog, I’m joined by Mike Solinap.

Mike, thanks for joining me today. Please introduce yourself.

Overview of AI Governance Boards

We’re here to talk about how smart companies are establishing AI governance boards to responsibly evaluate and approve AI tools across their technology ecosystems, not just in engineering and IT. We’ll cover most of those areas today.

Mike works with organizations every day on cloud infrastructure, security, and technology strategy, so he brings a lot of practical perspective on how companies are safely adopting AI without slowing down innovation. We don’t want to slow things down.

So Mike, when organizations are evaluating AI tools like Copilot, Rovo, or GitLab Duo, what are the biggest security and data governance risks that they overlook? And how should an AI governance board assess those risks during the onboarding process for these tools?

Security and Data Governance Risks

Sure. When companies begin evaluating AI tools, they usually focus on things like model accuracy or productivity gains, or they simply rely on the compliance statements made by the tool’s vendor.

But it’s really important to focus on the things you mentioned, which are security and data governance.

The top things that companies overlook in terms of security and data governance are, in my mind, the following. Number one: data oversharing.

AI tools, as you mentioned, are designed to reach into many systems and aggregate data across them: knowledge bases, emails, files, maybe even chats.

Users might technically have access to that data already, but they would rarely stumble across it on their own.

If your permissions are poorly configured and you aggregate that data, your AI tools can surface sensitive documents that users might otherwise never see.

So that’s one risk.
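
To make that mitigation concrete, here is a minimal Python sketch of permission-aware retrieval, where documents are filtered against a user’s existing access rights before the AI tool ever indexes them. The document model and directory lookup are hypothetical placeholders, not any particular product’s API.

```python
# A minimal sketch of permission-aware retrieval, assuming a hypothetical
# document model and directory lookup (not any specific product's API).
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    content: str
    allowed_groups: set[str]  # groups permitted to read this document

def acl_for(user_id: str) -> set[str]:
    """Return the groups the user belongs to (stub for your directory)."""
    return {"engineering", "all-staff"}  # placeholder lookup

def visible_documents(user_id: str, corpus: list[Document]) -> list[Document]:
    """Only hand the AI tool documents the user could already open."""
    groups = acl_for(user_id)
    return [doc for doc in corpus if doc.allowed_groups & groups]
```

The key design choice is that filtering happens at ingestion or retrieval time, using the same permissions users already have, rather than trusting the AI layer to enforce access.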

Number two is prompt injection, typically coupled with data exfiltration.

Consider the following example. Let’s say a malicious user who has access to a system uploads a document into one of the file systems. That document instructs the AI to search internal systems, summarize sensitive information, and then send it out to an external party, for example by email.

That’s another risk.
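
As an illustration, here is a minimal Python sketch of one defensive layer against this: screening retrieved content for instruction-like text before it reaches the model’s context. The patterns are illustrative, and pattern matching alone is a weak defense; treat it as one layer among several.

```python
import re

# Patterns that suggest a document is trying to instruct the AI rather
# than inform it. These are illustrative, not exhaustive.
SUSPICIOUS_PATTERNS = [
    r"ignore (all |any )?previous instructions",
    r"send .* to [\w.+-]+@[\w-]+\.\w+",  # instructions to email data out
    r"summarize .* and (forward|email|post)",
]

def looks_like_injection(text: str) -> bool:
    return any(re.search(p, text, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)

def quarantine(docs: list[str]) -> tuple[list[str], list[str]]:
    """Split retrieved content into safe and flagged-for-review buckets."""
    safe = [d for d in docs if not looks_like_injection(d)]
    flagged = [d for d in docs if looks_like_injection(d)]
    return safe, flagged
```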

Number three is unmanaged AI usage.

I’ve got a couple of interesting stats here. According to TechRadar, up to 78% of workers introduce their own AI tools without any oversight. I’m sure many of you are guilty of doing that.

Additionally, employees frequently paste sensitive data into public AI tools. We’re talking about code snippets, product design documents, and maybe even sensitive photos.

Gartner predicts that by 2030, 40% of enterprises will have experienced breaches tied to unmanaged AI tools.

That’s a big risk.

Finally, there’s the topic of increased tool security risk.

What I mean by that is when you implement an AI tool, you’re also extending your attack surface. You’ve got new plugins, integrations, and MCP servers.

Your typical layered approach to security is going to need to extend into those new pieces of infrastructure, and that often gets overlooked.
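
One practical way to keep that new surface area under the board’s control is an explicit allowlist of reviewed integrations and the scopes each is allowed to hold. Here is a minimal Python sketch; the registry entries and scope names are illustrative, not a real product catalog.

```python
# Hypothetical registry of integrations the governance board has reviewed,
# with the scopes each one is allowed to hold.
APPROVED_INTEGRATIONS = {
    "gitlab-duo": {"scopes": {"read_code"}, "reviewed": "2026-01"},
    "confluence": {"scopes": {"read_pages"}, "reviewed": "2026-02"},
}

def authorize_integration(name: str, requested_scopes: set[str]) -> bool:
    """Reject unknown integrations and scope escalations outright."""
    entry = APPROVED_INTEGRATIONS.get(name)
    if entry is None:
        return False  # not reviewed by the board yet
    return requested_scopes <= entry["scopes"]

print(authorize_integration("gitlab-duo", {"read_code"}))         # True
print(authorize_integration("gitlab-duo", {"write_code"}))        # False
print(authorize_integration("random-mcp-server", {"read_code"}))  # False
```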

Real-World Risk Example

That really concerns me because I think of a scenario I heard last night where sensitive internal pricing information that isn’t public was put into ChatGPT.

Now the model knows what that company’s pricing model is. That’s a huge risk.

So from a security and compliance perspective, what criteria should AI governance boards include in their approval process to ensure that these tools don’t introduce risks like pricing or IP leakage, model poisoning, or regulatory exposure?

What do they need to do there?

Core Criteria for AI Governance Boards

There are some core criteria that a governance board needs to evaluate.

The biggest thing is number one: data classification and access.

As a board, you need to determine exactly what data the AI tool can access and how it’s classified.

What data sources does the AI ingest? Is it product software code, hardware design files, or something like HR documents?

We also need to ask whether that data is part of a regulated data set, because that’s a really big consideration depending on what industry you’re in.

Number two is model training and data usage.

Boards need to ensure that company data isn’t being used to train or improve external models. The example you gave was a great one, because companies could give away their competitive advantage.

They need to ask questions like:

  • Are all the prompts being logged?
  • Are those prompts being used as part of the model training?
  • Where are your vector databases being stored?
  • What are the data retention limits?

Those can be major considerations.
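
One way to operationalize those questions is to record each tool’s answers in a structured assessment that every candidate is measured against. Here is a minimal Python sketch; the fields and the pass/fail thresholds are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    tool_name: str
    prompts_logged: bool
    prompts_used_for_training: bool
    vector_db_location: str  # e.g. "vendor-hosted, us-east"
    data_retention_days: int
    regulated_data_in_scope: bool

    def passes_baseline(self) -> bool:
        """A simple pass/fail gate; a real board weighs each item."""
        return (self.prompts_logged
                and not self.prompts_used_for_training
                and self.data_retention_days <= 90)

review = AIToolAssessment("ExampleCopilot", True, False,
                          "vendor-hosted, us-east", 30, False)
print(review.passes_baseline())  # True
```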

Number three brings us back to prompt injection, which we discussed earlier.

As a board, you need to verify that prompts are isolated and sanitized. You also need to ensure that content can be filtered if necessary as part of the governance model.

Finally, number four, and I think this is a very important one, is the identity and privilege model.

What I mean by that is: does the tool support scoped service accounts, so that it operates with least privilege?

In terms of APIs, does it use short-lived tokens that get rotated frequently?

From a networking perspective, does it enforce a zero-trust environment?
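
To illustrate that identity model, here is a minimal Python sketch of scoped, short-lived tokens with explicit expiry. The token store and issuance flow are hypothetical, standing in for whatever identity provider you actually use.

```python
import secrets
import time

TOKEN_TTL_SECONDS = 900  # 15 minutes, rotated frequently
_tokens: dict[str, tuple[set[str], float]] = {}

def issue_token(scopes: set[str]) -> str:
    """Mint a token limited to the requested scopes and a short lifetime."""
    token = secrets.token_urlsafe(32)
    _tokens[token] = (scopes, time.time() + TOKEN_TTL_SECONDS)
    return token

def authorize(token: str, needed_scope: str) -> bool:
    """Least privilege: fail unless the scope was granted and unexpired."""
    entry = _tokens.get(token)
    if entry is None:
        return False
    scopes, expires_at = entry
    return needed_scope in scopes and time.time() < expires_at

t = issue_token({"read:requirements"})
print(authorize(t, "read:requirements"))   # True
print(authorize(t, "write:requirements"))  # False
```

A real deployment would delegate this to your identity provider; the point is that the AI tool never holds a standing privileged credential.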

Balancing Innovation and Security

I hate that this is such a complex process layered on top of innovation. But the outcomes you described earlier are so potentially damaging that companies really do need some level of approval process around these tools: where the data can go, and what could happen as a result.

So really good information there.

AI Integration with Engineering Tools

Mike, we work with a lot of product teams that use different tooling. Some, in the physical product realm, use PLM and CAD systems; others work in the software realm.

How do you see AI features in those tools, like Rovo Agents, GitLab Duo, or Azure Copilot, fitting into existing product tools like PLM and CAD systems, and even DevSecOps pipelines?

And what role should an AI governance board play in deciding where and how these tools integrate?

AI as an Interface for Complex Systems

Having supported engineering, CAD, ALM, and PLM tools for many years now, what we’ve found is that a lot of those tools are very difficult to use. They’re not very user-friendly, and they differ significantly from one another.

For example, there are many situations where companies will switch from one tool to another that provides the same functionality.

There’s some relearning involved, not only from the user’s perspective but also from an administrative perspective.

Where I see these AI features coming into place is allowing natural language to interface with these very complex tools.

For example, in requirements management, someone could simply say, “Get me the requirement for this particular product,” or “Create a new requirement in the system based off these specifications.”
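
As a sketch of that idea, here is a minimal Python example that routes a natural-language request to a structured call against a requirements system. The keyword matching and the RequirementsAPI class are hypothetical placeholders; in practice an LLM would do the intent parsing.

```python
# `RequirementsAPI` and the keyword matching below are hypothetical
# placeholders; a real integration would use an LLM for intent parsing.
class RequirementsAPI:
    def get_requirement(self, product: str) -> str:
        return f"[requirements for {product}]"

    def create_requirement(self, spec: str) -> str:
        return f"created requirement from spec: {spec}"

def handle(prompt: str, api: RequirementsAPI) -> str:
    text = prompt.lower()
    if text.startswith("get me the requirement"):
        product = prompt.rsplit("for", 1)[-1].strip()
        return api.get_requirement(product)
    if text.startswith("create a new requirement"):
        spec = prompt.split("based off", 1)[-1].strip()
        return api.create_requirement(spec)
    return "Sorry, I can't map that request yet."

api = RequirementsAPI()
print(handle("Get me the requirement for Widget X", api))
```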

Treating AI Like an Intern

In terms of an AI governance board and the role they should play in deciding where and how these tools integrate, one concept our team likes to use is treating the AI tool like an intern.

An intern has some knowledge of the tool and the process, but not the full experience, so you may not want to fully trust their work.

What we can do is create a staging area where the AI tool can write data into your application, but with checks and balances and a human review before that data makes its way into production.
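
Here is a minimal Python sketch of that staging pattern, assuming in-memory stores as placeholders for your real application: AI-generated changes land in a pending queue, and only a human reviewer can promote them to production.

```python
from dataclasses import dataclass, field

@dataclass
class StagingArea:
    pending: list[dict] = field(default_factory=list)
    production: list[dict] = field(default_factory=list)

    def submit(self, change: dict) -> None:
        """AI-generated changes land here, never directly in production."""
        self.pending.append(change)

    def review(self, index: int, approved: bool, reviewer: str) -> None:
        """A human decides whether the change is promoted."""
        change = self.pending.pop(index)
        if approved:
            change["approved_by"] = reviewer
            self.production.append(change)

staging = StagingArea()
staging.submit({"type": "requirement", "text": "Battery life >= 10h"})
staging.review(0, approved=True, reviewer="lead.engineer")
print(staging.production)
```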

Closing Discussion

I like the intern comparison. We’ve all worked with interns. They’re awesome, but at the same time they may not know everything or have all the context.

But they do a lot of the heavy lifting. So why not treat it like that?

Mike, thank you so much for sharing your insights on how organizations should think about AI governance. I really appreciate your time.

No problem.

Final Takeaways

As more AI capabilities become embedded into tools like Rovo, GitLab Duo, Copilot, and many others, it’s clear that companies need a structured approach for evaluating risk, protecting intellectual property, and ensuring that these technologies actually improve productivity rather than introduce new challenges.

For organizations looking to adopt AI responsibly, establishing an AI governance board is a really critical step in aligning innovation with security, compliance, and long-term business value.

If your organization is exploring how AI can operate within your engineering, DevOps, or product development environment and wants to ensure you have the right guidance to evaluate and implement these tools safely, our team at SPK and Associates would be happy to help.

We have a new product called the AI Launchpad, which covers all of this, including AI governance boards and building better products with AI.

Outro

Thanks for watching.

If you found this discussion helpful, be sure to follow the SPK and Associates YouTube channel for more insights on AI, DevSecOps, cloud infrastructure, and modern engineering tools.

We’ll see you in the next vlog. Thanks.

 
