
AI and technical professions: what changes with the new legislation and how Non-Destructive Testing is evolving

Construction site with a technician using a tablet and AI-equipped drones for digital surveying and advanced diagnostics.

The question is increasingly common: “If the engineering firm I hire uses artificial intelligence, can I trust them?”
It’s a legitimate question, because AI has now become an everyday tool even for those who design buildings, evaluate structures, or manage construction sites. But like any powerful technology, it raises doubts, expectations, and some fears.

In recent months, however, something has changed.
A regulatory framework has been introduced that brings order, clarifies responsibilities and, above all, protects those who commission work. It is a step that many have been waiting for: a way to understand how far technology can go and where, instead, human intervention must remain firm.

Here you can check the non-destructive tests that Teknoprogetti carries out in the construction industry.

A new scope: what the law on AI in technical professions really says

Law 132/2025, “Provisions and delegated powers to the Government regarding Artificial Intelligence,” is the first comprehensive attempt to regulate the use of AI in various sectors, including the intellectual professions.
The underlying principle is simple:
AI is a tool, not a substitute.

And from this derive three fundamental pillars:

  1. Mandatory disclosure to clients

The professional must clearly state, before starting work, whether and how they will use AI-based tools.

  2. AI cannot make decisions

Algorithms can support calculations, analyses, simulations… but they cannot determine a design choice on their own.

  3. Responsibility always remains with the engineer

No technological shortcuts: errors, inaccuracies, and assessments remain the responsibility of humans, not machines.

In short, the law establishes a key principle: technology can help, but it cannot “sign.”

Article 13: the core of protection for those who entrust work to others

Within Law 132/2025, Article 13 is the compass for anyone commissioning a project.
It establishes that:

  • the engineer must declare the use of AI;
  • AI cannot replace technical judgment;
  • human supervision is mandatory at every stage;
  • professional responsibility cannot be delegated.

This clears up any misunderstanding: AI is not a "shortcut" or a gray area; it is a tool that operates under the professional's control. It is also a cultural shift: professionals may use innovative tools to collect data, run assessments, and develop design variants, but they must ultimately put the decision-makers in a position to decide, and answer for the result.

CNI Circular 343/2025: everyday practice explained to technicians

Alongside the law, the National Council of Engineers published Circular 343/2025, a document that translates the regulation into actual procedures.

The CNI has made available:

  • two official disclosure forms (one for private clients, one for public administrations);
  • guidelines on how to communicate the use of AI;
  • guidelines on service quality, so that the use of AI does not reduce, but rather increases, the accuracy of the work.

The publication of Circular 343/2025 is not a bureaucratic detail, but a decisive step.
At a historic moment when artificial intelligence is making a strong entry into the work of engineers, the CNI has chosen not to simply observe: it is taking a stand, interpreting the regulation and, above all, translating it into concrete actions.

It is a significant gesture of responsibility that goes well beyond a simple “technical note.”
With this circular, in fact, the trade association:

  • provides guidance to professionals, helping them navigate a new and complex technological landscape;
  • harmonizes practices among technical firms, preventing each one from regulating itself in its own way;
  • raises the bar for quality, explaining how the use of AI must be consistent with professional ethics;
  • strengthens client protection by providing clear information templates and a shared vocabulary.

In other words, the CNI plays a role that only a trade association can play: acting as a bridge between regulations and the profession, between innovation and safety.
It does not simply say “what is mandatory,” but indicates how to apply it without distorting the nature of engineering work.

The importance of this stance can be seen above all on two levels.

1. Credibility with clients

When the CNI speaks, it speaks for all engineers:
professionals are not improvising, but following nationally recognized guidelines.

This builds trust, especially in a field—AI—where transparency is essential for those commissioning a project.

2. Protecting the role of engineers in the age of AI

The circular recognizes that artificial intelligence must be an advanced tool, not a substitute or an uncontrolled automatic process.
This position has political and cultural weight: it serves to prevent technological advancement from being perceived as a risk to human labor or, worse, as a shortcut that reduces the quality of performance.

The circular clearly states that AI can improve projects, but only if guided by those with the right skills, responsibilities, and training.
This highlights an often overlooked but fundamental value: technical expertise is not optional.

Why all this? To protect the client.

The aim of the legislation is not to stifle innovation. On the contrary, it is to create a secure environment in which innovation can flourish.

In concrete terms, those who commission a project obtain four fundamental guarantees:

  1. Clear responsibility

The engineer remains responsible for decisions, checks, and results.

  2. Transparency about the tools used

The client knows in advance which technologies will be used and for what purpose.

  3. Higher quality

AI speeds up processes, enables more accurate checks, and facilitates the evaluation of alternatives.

  4. Protection in case of problems

The disclosure becomes an official document, which is also useful in the event of any disputes.

It is a pact of clarity: the professional explains, the client understands, and both work with greater awareness.

What should a good AI policy contain?

The document must be simple, understandable, and complete.
The required elements are:

  • which AI tools are used;
  • at which stages of the project they are involved;
  • what limitations they have;
  • which aspects remain under human control;
  • who takes responsibility for decisions;
  • signature of the technician in charge.

Essentially, a sheet that describes how the work will be carried out and with what tools.
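
To make the checklist concrete, the required elements can be sketched as a simple data structure (a purely illustrative Python example; the field names and sample values are hypothetical, not an official CNI template):

```python
from dataclasses import dataclass

# Hypothetical sketch of the disclosure elements listed above.
# Field names and example values are illustrative only, not an
# official CNI form.
@dataclass
class AIDisclosure:
    ai_tools: list[str]            # which AI tools are used
    project_stages: list[str]      # stages where they are involved
    limitations: list[str]         # known limitations of the tools
    human_controlled: list[str]    # aspects kept under human control
    responsible_engineer: str      # who takes responsibility
    signed: bool = False           # signature of the technician in charge

    def is_complete(self) -> bool:
        """A disclosure is complete only if every element is filled in
        and the responsible technician has signed it."""
        return (all([self.ai_tools, self.project_stages,
                     self.limitations, self.human_controlled,
                     self.responsible_engineer])
                and self.signed)

disclosure = AIDisclosure(
    ai_tools=["image-based crack detection"],
    project_stages=["preliminary survey"],
    limitations=["results require manual verification"],
    human_controlled=["all design choices", "final assessments"],
    responsible_engineer="responsible technician (example)",
    signed=True,
)
print(disclosure.is_complete())  # True
```

The point of the sketch is only that each item in the list above maps to an explicit, checkable field that must be filled in before the technician signs.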

AI and the profession: an unchanging balance

The legislation strongly emphasizes a concept that is often forgotten in technological discourse:
artificial intelligence does not think, sign, or decide.

AI can:

  • analyze variants,
  • manage large amounts of data,
  • accelerate simulations,
  • identify design alternatives.

But the human role remains central:

  • checks,
  • interpretations,
  • technical choices,
  • accepting responsibility.

In other words: AI does not replace the profession of engineer, it enhances it.

And for the client? The advantages are tangible.

When used correctly and transparently, AI brings immediate benefits:

  • Greater accuracy

More in-depth analysis and fewer errors in complex steps.

  • Shorter turnaround times

Automated repetitive tasks, faster simulations, smoother processes.

  • More design alternatives

AI compares variants and scenarios quickly and objectively.

  • Better cost optimization

Materials, timing, energy consumption: everything can be calibrated more precisely.

The result is a clearer, more robust, and more cost-effective project.

The conclusion? More clarity, more guarantees, more quality.

The new legislation does not restrict the use of AI: it makes it safe.
It establishes boundaries, responsibilities, and communication methods.

For those who entrust a task to an engineer, this means:

  • no unsupervised automatic decisions,
  • no loss of guarantees,
  • more awareness,
  • more precision,
  • overall quality of the project.

AI accelerates, expands, and strengthens the work of technicians.
The law ensures that this happens with the necessary transparency.

It is the best way to combine innovation and trust.

Structural survey with laser tracing in an industrial building, illustrating advanced, AI-supported non-destructive testing.

Who would really benefit?

Beyond the technological premises, the most practical question remains: who actually benefits from this leap forward?

For Teknoprogetti

  • working times reduced by up to 60%;
  • greater consistency between tests;
  • more comprehensive documentation without time overloads;
  • ability to offer predictive services and not just diagnostic ones.

For the client

  • less uncertainty and more objective data;
  • clear comparison between variants and intervention scenarios;
  • potential savings thanks to predictive maintenance;
  • greater transparency on costs, limits, benefits, and responsibilities.

In other words, a stronger and more informed relationship between client and professional.
This is precisely the objective for which the legislation was written: to open the door to innovation without sacrificing trust and responsibility.

Conclusion: a bridge between protection and innovation

AI will not replace engineers, nor will it ever be able to do so: this is confirmed by law, professional ethics, and common sense.
However, it could make their work more effective, more transparent, and more accessible to clients.

When Teknoprogetti decides to integrate these technologies, it will not only position itself at the forefront, but will also be able to offer a richer, faster, and clearer service.

And what about the client?
They will receive exactly what the legislation promises: more guarantees, more information, and higher quality.

A new balance, where innovation and responsibility are not mutually exclusive, but reinforce each other.

Frequently Asked Questions about AI and Non-Destructive Testing

Does the new Law 132/2025 prohibit the use of AI in technical projects?

No, the law does not prohibit AI: it states that it can be used as a support, but cannot replace the technical judgment of the engineer.

Should engineers disclose to clients whether they use AI-based tools?

Yes. The client must be informed, and the information provided must explain which tools are used and their limitations.

Who is responsible if AI makes an analysis error?

Responsibility always remains with the professional in charge. The AI cannot sign, decide, or assume responsibility.

How can AI improve Non-Destructive Testing?

It can standardize test data, recognize degradation patterns, generate 3D maps, and speed up analysis and predictive scenarios.
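
As a deliberately simplified illustration of what "recognizing degradation patterns" can mean in practice, here is a toy sketch that flags anomalous ultrasonic thickness readings with a z-score test (the data, threshold, and method are invented for illustration and are not Teknoprogetti's actual pipeline):

```python
# Toy sketch: flag anomalous ultrasonic thickness readings (in mm)
# using a simple z-score test. Real NDT pipelines are far more
# sophisticated; the data and threshold here are invented examples.
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.0):
    """Return the indices of readings that deviate from the mean
    by more than `threshold` standard deviations."""
    mu = mean(readings)
    sigma = stdev(readings)
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]

# Thickness survey of a steel plate: one reading shows clear loss.
thickness_mm = [12.1, 12.0, 11.9, 12.2, 12.1, 8.4, 12.0, 11.8]
print(flag_anomalies(thickness_mm))  # → [5], the low reading
```

In a real workflow, the flagged readings would simply be queued for review by the responsible technician, consistent with the human-supervision requirement described above.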

Does AI reduce NDT times?

Yes, in many cases it can speed up data collection and analysis by up to 50–60%, while maintaining technical supervision.
