The U.S. is pressing its claim to AI dominance under the motto “Winning the AI Race.” The global race for AI leadership does not leave Swiss companies untouched. The question is: how well do companies protect what defines their future?
Thomas Fürling, CEO and founder, e3. (Source: supplied)
In July 2025, the U.S. government presented its AI strategy – a document that is more than an economic program: it is the blueprint for a new global power structure. “Winning the AI Race” boils down to: deregulate, invest, dominate. Innovation is no longer treated as a business goal but as a geopolitical instrument. State-coordinated data centers, export offensives, national AI systems – all of this is meant to secure U.S. technological leadership. But governments and companies are not the only ones investing in AI: cybercriminals are also enriching their business models with it – without rules, and with sheer aggression.
What does this development mean for companies that cannot escape this pace and logic, but have to operate under entirely different conditions?
Efficiency meets risk: AI as a security factor
For every company, AI opens up enormous efficiency gains. But the new risks it creates demand just as much attention. AI generates language, automates processes, makes decisions, and operates deep within sensitive zones: in customer dialogue, in proprietary datasets, in security-critical networks. The more deeply AI is integrated into business models, the more vulnerable previously secure structures become.
This means: in a world where deepfakes, adaptive malware, and synthetic identities are part of attackers’ standard repertoire, traditional security mechanisms are no longer sufficient. Yet in many digital strategies, security remains an afterthought – often reduced to compliance. Companies try to manage it with complex contracts or cyber insurance. But those who fail to protect their data, their models, and their know-how risk not only their reputation in the AI era but also their competitiveness. At that point, contracts and insurance are of little help.
2035: Digital polarization looms
Within ten years, artificial superintelligence will be available to everyone. It will surpass humans in many areas and fundamentally change the social and economic order. The U.S. is betting on speed and capital, Europe on regulation and fundamental rights. Switzerland stands in between – with strong know-how, but a small market.
The risk: being crushed between these currents. The potential: turning Swissness and neutrality into a competitive advantage. The prerequisite is to drive AI innovation by developing super-intelligent security architectures that are internationally recognized as reliable.
This requires technical solutions: DLP systems that block sensitive data before it leaks, strong encryption at every level, and proprietary AI models that remain under the company’s control. But it also requires principles, processes, and culture. One thing must be clear: security is already the decisive factor in every AI and business strategy. Because the future belongs not to the fastest, but to those who consider security from the start – and implement it consistently.
“AI relieves, but does not replace the human view of risks.”
Interview with Thomas Fürling, CEO and founder of e3
Where is the greater risk today: in AI attacks or in careless use of AI within companies?
Thomas Fürling: At present, phishing is probably the biggest problem. The National Cyber Security Centre recorded 975,309 reports in 2024 – an increase of 108 percent over the previous year, with the trend still rising. The attacks are often so good that even trained employees do not immediately recognize them. Data loss through careless AI input is another real risk, especially because it is often unclear how confidentially these systems handle data. Incidentally, the U.S. government announced in July 2025 that AI models may use copyrighted content without a license.
How can companies prevent data leaks through employee use of AI?
Companies must train employees specifically on which data count as sensitive and which tools may be used. Since AI increasingly runs in the background, data is sometimes shared unconsciously. Data Loss Prevention (DLP) can monitor and control this outflow at predefined boundaries – for example, in prompts.
It becomes critical as soon as AI starts communicating with AI: then a complete loss of control looms. My gut tells me that won’t take long. That’s why clear governance is essential: should AI use be fundamentally restricted, or do we accept the residual risk?
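The boundary check described above can be sketched in a deliberately minimal way. All rule names and patterns here are hypothetical illustrations, not a real DLP policy, which would rely on classification labels, fingerprints, and exact-match lists rather than a handful of regular expressions:

```python
import re

# Hypothetical patterns for sensitive data; a real DLP policy is far broader.
SENSITIVE_PATTERNS = {
    "iban": re.compile(r"\bCH\d{2}[ ]?(\d{4}[ ]?){4}\d\b"),   # Swiss IBAN
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "keyword": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of all rules the prompt violates."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def guard(prompt: str) -> str:
    """Block the prompt at the boundary if any rule matches."""
    violations = check_prompt(prompt)
    if violations:
        raise PermissionError(f"Blocked by DLP rules: {', '.join(violations)}")
    return prompt  # safe to forward to the external AI service
```

In practice such checks run in a gateway, proxy, or endpoint agent rather than in the client application itself, so they also catch data shared unconsciously by tools running in the background.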
How can companies meaningfully integrate AI into their security strategy without losing control?
Quite clearly with DLP. This way, companies can also use cloud-based AI securely. The prerequisite is that DLP reliably blocks the outflow of sensitive data – even in linked cloud storage with AI access. The safest option is a local AI model, which is particularly interesting for software development. In that case, even confidential source code can be processed, which improves result quality while retaining full control.
How do AI-based tools help in handling security incidents?
The decisive advantage of AI lies in recognizing patterns, correlations, and context. It noticeably relieves teams of time-consuming incident analysis. The crucial point is that it is trained on the company’s own decision data. Only then does it understand the company’s risk behavior and can act accordingly. One example is the use of AI in the regulatory environment: we support companies in implementing AI-driven DLP systems that automatically detect when employees enter sensitive data into AI tools such as ChatGPT. These systems deliver directly usable reports that allow compliance requirements to be met efficiently and transparently.
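The correlation step mentioned above can be illustrated with a deliberately simplified sketch. The event fields and the time window are hypothetical: the idea is merely that alerts sharing an indicator within a short burst are grouped into one correlated incident, so analysts review one case instead of many isolated alerts:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def correlate(events, window=timedelta(minutes=10)):
    """Group events sharing a source indicator within a time window.

    `events` is a list of dicts with hypothetical fields
    'time' (datetime), 'source' (str), and 'alert' (str).
    Returns a list of correlated incident groups.
    """
    by_source = defaultdict(list)
    for event in sorted(events, key=lambda e: e["time"]):
        by_source[event["source"]].append(event)

    incidents = []
    for source, evts in by_source.items():
        group = [evts[0]]
        for event in evts[1:]:
            if event["time"] - group[-1]["time"] <= window:
                group.append(event)       # same burst: correlate
            else:
                incidents.append(group)   # burst ended: close the group
                group = [event]
        incidents.append(group)
    return incidents
```

A production system adds the context recognition the interview mentions – asset criticality, user roles, learned risk behavior – on top of this kind of basic grouping.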
How do IT departments implement information protection as an intelligent, continuous process in practice?
Information protection must be championed at the leadership level, because in IT it unfortunately does not always have top priority. Weaknesses must be analyzed openly and risks assessed transparently and with a forward-looking view – retrospective assessments fall short. Whether a resulting risk is acceptable should not be decided by the IT department alone. Sustainable information protection requires continuous improvement – it is not a one-time investment. AI attacks, the professionalization of cybercrime, and geopolitical uncertainties are constantly changing the risk landscape. What is needed is a clear framework, the courage to intervene decisively, and the flexibility to adapt quickly to new threats.
“When AI takes over, trust is misplaced – control, however, is indispensable.”
Thomas Fürling, CEO and founder, e3
