AI strategies for cybersecurity press releases that get coverage

If you’ve ever tried to get your cybersecurity news picked up by media outlets, you’ll know just how much of a challenge (and how disheartening) it can be. You pour hours into what you think is an excellent announcement about your new security tool, threat research, or vulnerability discovery, only to watch it disappear into journalists’ overflowing inboxes without a trace.

The cyber PR space is brutally competitive. Reporters at top publications receive tens, if not hundreds, of pitches each day, so they have no choice but to be highly selective about which releases to cover and which to discard. Your challenge, then, isn’t just creating a good press release; it’s making one that grabs attention and stands out in an industry drowning in technical jargon and “revolutionary” solutions.

Why most cybersecurity press releases fall flat

Let’s first look at some of the main reasons why many cyber press releases fail:

  • They’re too complex from the start, losing non-technical reporters.
  • They bury the actual news under corporate marketing speak.
  • They focus on product features rather than the real-world impact or problems they solve.
  • They lack credible data or specific research findings that journalists can cite as support.

Most of these problems share one root cause: journalists aren’t interested in promoting your product or your business. They are looking after their own interests, seeking newsworthy stories their audiences care about. Keep this in mind and make their job easier by showing them exactly why your announcement matters.

Learning how to write a cybersecurity press release

What does a well-written press release look like? Alongside the reasons listed above, many companies make the mistake of submitting poorly formatted releases that journalists will be unlikely to spend time reading.

It’s worth learning how to write a cybersecurity press release properly, including the preferred structure (headline, subheader, opening paragraph, boilerplate, etc). And, be sure to review some examples of high-quality press releases as well.

AI strategies that transform your press release process

Let’s examine how AI tools can significantly enhance your cyber PR at every stage.

1. Research enhancement

Use AI tools to track media coverage patterns and identify emerging trends in cybersecurity news. You can analyse which types of security stories gain traction, and this can help you position your announcement in that context.

Another idea is to use LLMs (like Google’s Gemini or OpenAI’s ChatGPT) to analyse hundreds of successful cybersecurity press releases in a niche similar to yours. Ask it to identify common elements in those that generated significant coverage, and then use these same features in your cyber PR efforts.

To take this a step further, AI-powered sentiment analysis can help you understand how different audience segments receive specific cybersecurity topics. The intelligence can help you tailor your messaging to address current concerns and capitalise on positive industry momentum.

2. Writing assistance

If you struggle to convey complex ideas and terminology in more accessible language, consider asking the LLM to help simplify your messaging. This can help transform technical specifications into clear, accessible language that non-technical journalists can understand.

Since the headline is the most important part of your release, use an LLM to generate a handful of options based on your core announcement, then select the best one based on clarity and impact. Once your press release is complete, run it through an LLM to identify and replace jargon that might be second nature to your security team but may be confusing to general tech reporters.
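
To make this concrete, here is a minimal sketch of the headline-generation step. It assumes the official openai Python client (v1+) and an API key in the environment; the model name, prompt wording, and helper function are illustrative choices rather than a recommended setup.

```python
# Minimal sketch: asking an LLM for candidate headlines for a draft release.
# Assumes the official `openai` Python client (v1+) and an OPENAI_API_KEY
# environment variable; model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

def suggest_headlines(press_release: str, n: int = 5) -> list[str]:
    """Return n candidate headlines, one per line of the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model would do
        messages=[
            {"role": "system",
             "content": "You are a PR editor. Write clear, jargon-free headlines."},
            {"role": "user",
             "content": f"Suggest {n} headline options, one per line, "
                        f"for this cybersecurity press release:\n\n{press_release}"},
        ],
    )
    text = response.choices[0].message.content or ""
    return [line.strip("-• 0123456789.").strip()
            for line in text.splitlines() if line.strip()]

if __name__ == "__main__":
    draft = "Acme Security today announced ..."  # placeholder draft text
    for headline in suggest_headlines(draft):
        print(headline)
```

The same pattern works for the jargon check: swap the prompt for one that asks the model to list any terms a general tech reporter might not recognise.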

3. Visual storytelling

If you are struggling to find ways to explain your product or service in accessible language, visuals can help. AI image generation tools like Midjourney create custom visuals from text prompts that help illustrate your message, and the latest models can handle detailed, multi-part prompts.

With a bit of prompt engineering (and by incorporating the press release you want help with), you should be able to create accompanying images and infographics that bring your message to life.

4. Video content

Going one step further than a static image, a brief AI-generated explainer video can sit alongside your press release, providing journalists with ready-to-use content that explains complex security concepts. Some ideas include:

  • Short Explainer Videos: Use text-to-video tools to turn essential sections of your press release into a brief (60 seconds or less) animated or stock-footage-based video. You can usually use narration and text overlays directly on the AI platforms as well.
  • AI Avatar Summaries: Several tools now enable you to create a brief video in which a human-looking AI avatar reads out and presents the core message of your press release, giving it an audio-visual component.
  • Data Visualisation Videos: Use AI tools to animate key statistics or processes described in the release for enhanced clarity.

Final word

Even as you use the AI tools you have at your disposal, remember that the most effective cybersecurity press releases still require that all-important human insight and expertise. Your goal isn’t to automate the entire process. Instead, use AI to enhance your cyber PR efforts and make your releases stand out from the crowd.

AI should help emphasise, not replace, the human elements that make security stories engaging and compelling. Be sure to shine a spotlight on the researchers who made the discovery, the real-world implications of any threats or vulnerabilities you uncover, and the people those security measures ultimately protect.

Combine this human-focused storytelling with the power of AI automation, and you’ll ensure that your press releases and cyber PR campaigns get the maximum mileage.

Google launches A2A as HyperCycle advances AI agent interoperability

AI agents handle increasingly complex and recurring tasks, such as planning supply chains and ordering equipment. As organisations deploy more agents developed by different vendors on different frameworks, those agents can end up siloed, unable to coordinate or communicate. The lack of interoperability remains a challenge for organisations, with different agents making conflicting recommendations. It’s difficult to create standardised AI workflows, and agent integration requires middleware, adding more potential failure points and layers of complexity.

Google’s protocol will standardise AI agent communication

Google unveiled its Agent2Agent (A2A) protocol at Cloud Next 2025 in an effort to standardise communication between diverse AI agents. A2A is an open protocol that allows independent AI agents to communicate and cooperate. It complements Anthropic’s Model Context Protocol (MCP), which provides models with context and tools: MCP connects agents to tools and other resources, while A2A connects agents to other agents. Google’s new protocol facilitates collaboration among AI agents across different platforms and vendors, and provides for secure, real-time communication and task coordination.

The two roles in an A2A-enabled system are a client agent and a remote agent. The client initiates a task, either to achieve a goal of its own or on behalf of a user, and makes requests that the remote agent receives and acts on. Depending on who initiates the communication, an agent can be a client agent in one interaction and a remote agent in another. The protocol defines a standard message format and workflow for the interaction.

Tasks are at the heart of A2A, with each task representing a work or conversation unit. The client agent sends the request to the remote agent’s send or task endpoint. The request includes instructions and a unique task ID. The remote agent creates a new task and starts working on it.
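
As a rough illustration of that flow, the sketch below builds and posts a task request from a client agent. It is a hedged example based only on the description above: the endpoint URL, method name, and payload fields are assumptions for illustration, not the official A2A schema.

```python
# Hedged sketch of a client agent sending a task to a remote agent.
# The endpoint URL, method name, and payload fields are assumptions based
# on the description in this article, not the official A2A schema.
import uuid
import requests

REMOTE_AGENT_URL = "https://remote-agent.example.com/a2a"  # hypothetical endpoint

def send_task(instructions: str) -> dict:
    """Create a new task on the remote agent and return its response."""
    task_id = str(uuid.uuid4())  # unique task ID, as the protocol requires
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tasks/send",  # assumed method name
        "params": {
            "id": task_id,
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": instructions}],
            },
        },
    }
    response = requests.post(REMOTE_AGENT_URL, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()  # the remote agent's record of the new task

if __name__ == "__main__":
    print(send_task("Compare supplier quotes and draft a purchase order."))
```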

The protocol enjoys broad industry support, with contributions from more than 50 technology partners, including Intuit, LangChain, MongoDB, Atlassian, Box, Cohere, PayPal, Salesforce, SAP, Workday, ServiceNow, and UKG. Service providers contributing to the effort include Capgemini, Cognizant, Accenture, BCG, Deloitte, HCLTech, McKinsey, PwC, TCS, Infosys, KPMG, and Wipro.

How HyperCycle aligns with A2A principles

HyperCycle’s Node Factory framework makes it possible to deploy multiple agents, addressing existing challenges and enabling developers to create reliable, collaborative setups. The decentralised platform is advancing the bold concept of “the internet of AI” and using self-perpetuating nodes and a creative licensing model to enable AI deployments at scale. The framework helps achieve cross-platform interoperability by standardising interactions and supporting agents from different developers so agents can work cohesively, irrespective of origin.

The platform’s peer-to-peer network links agents across an ecosystem, eliminating silos and enabling unified data sharing and coordination across nodes. The self-replicating nodes can scale, reducing infrastructure needs and distributing computational loads.

Each Node Factory replicates up to ten times, with the number of nodes in the Factory doubling each time. Users can buy and operate Node Factories at ten different levels. Growth enhances each Factory’s capacity, fulfilling increasing demand for AI services. One node might host a communication-focused agent, while another supports a data analysis agent. Developers can create custom solutions by crafting multi-agent tools from the nodes they’re using, addressing scalability issues and siloed environments.
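
To put that doubling in concrete terms, the short sketch below works through the arithmetic, assuming for illustration that a Factory starts from a single node (the starting count is an assumption, not something the project specifies here).

```python
# Worked arithmetic for Node Factory growth: the node count doubles on each
# replication, up to ten replications. Starting from one node is an
# assumption for illustration only.
def nodes_after(replications: int, starting_nodes: int = 1) -> int:
    return starting_nodes * 2 ** replications

for level in range(1, 11):
    print(f"After {level:>2} replications: {nodes_after(level):>5} nodes")
# Ten doublings of a single starting node yield 1,024 nodes.
```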

HyperCycle’s Node Factory operates in a network using Toda/IP architecture, which parallels TCP/IP. The network encompasses hundreds of thousands of nodes and lets developers integrate third-party agents. A developer can enhance functionality by incorporating a third-party analytics agent, sharing intelligence and promoting collaboration across the network.

According to Toufi Saliba, HyperCycle’s CEO, the exciting development from Google around A2A represents a major milestone for his agent cooperation project. The news supports his vision of interoperable, scalable AI agents. In an X post, he said many more AI agents will now be able to access the nodes produced by HyperCycle Factories. Nodes can be plugged into any A2A, giving each AI agent in Google Cloud (and its 50+ partners) near-instant access to AWS agents, Microsoft agents, and the entire internet of AI. Saliba’s statement highlights A2A’s potential and its synergy with HyperCycle’s mission.

The security and speed of HyperCycle’s Layer 0++

HyperCycle’s Layer 0++ blockchain infrastructure offers security and speed, and complements A2A by providing a decentralised, secure infrastructure for AI agent interactions. Layer 0++ is an innovative blockchain operating on Toda/IP, which divides network packets into smaller pieces and distributes them across nodes.

It can also extend the usability of other blockchains by bridging to them, which means HyperCycle can enhance the functionality of Bitcoin, Ethereum, Avalanche, Cosmos, Cardano, Polygon, Algorand, and Polkadot rather than compete with those blockchains.

DeFi, decentralised payments, swarm AI, and other use cases

HyperCycle has potential in areas like DeFi, swarm AI, media ratings and rewards, decentralised payments, and computer processing. Swarm AI is a collective intelligence system in which individual agents collaborate to solve complicated problems. With HyperCycle, those agents can interoperate more readily, allowing lightweight agents to carry out complex processes between them.

The HyperCycle platform can improve ratings and rewards in media networks through micro-transactions. The ability to perform high-frequency, high-speed, low-cost, on-chain trading presents innumerable opportunities in DeFi.

It can streamline decentralised payments and computer processing by increasing the speed and reducing the cost of blockchain transactions.

HyperCycle’s efforts to improve access to information precede Google’s announcement. In January 2025, the platform announced it had launched a joint initiative with YMCA – an AI app called Hyper-Y that will connect 64 million people in 12,000 YMCA locations across 120 countries, providing staff, members, and volunteers with access to information from the global network.

HyperCycle’s efforts and Google’s A2A converge

Google hopes its protocol will pave the way for collaboration to solve complex problems, and plans to build the protocol with the community, in the open. A2A was released as open source, with contribution pathways to follow. HyperCycle’s innovations aim to enable collaborative problem-solving by connecting AI to a global network of specialised abilities, while A2A standardises communication between agents regardless of their vendor or build, opening the door to more collaborative multi-agent ecosystems.

A2A and HyperCycle bring ease of use, modularity, scalability, and security to AI agent systems. Together, they can unlock a new era of agent interoperability, creating more flexible and powerful agentic systems.

Web3 tech helps instil confidence and trust in AI

The promise of AI is that it’ll make all of our lives easier. And with great convenience comes the potential for serious profit. The United Nations thinks AI could be a $4.8 trillion global market by 2033 – about as big as the German economy.

But forget about 2033: in the here and now, AI is already fueling transformation in industries as diverse as financial services, manufacturing, healthcare, marketing, agriculture, and e-commerce. Whether it’s autonomous algorithmic ‘agents’ managing your investment portfolio or AI diagnostics systems detecting diseases early, AI is fundamentally changing how we live and work.

But cynicism is snowballing around AI – we’ve seen Terminator 2 enough times to be extremely wary. The question worth asking, then, is how do we ensure trust as AI integrates deeper into our everyday lives?

The stakes are high. A recent report by Camunda highlights an inconvenient truth: most organisations (84%) attribute regulatory compliance issues to a lack of transparency in AI applications. If companies can’t see inside the algorithms – or worse, if the algorithms are hiding something – users are left completely in the dark. Add systemic bias, untested systems, and a patchwork of regulations, and you have a recipe for mistrust on a large scale.

Transparency: Opening the AI black box

For all their impressive capabilities, AI algorithms are often opaque, leaving users ignorant of how decisions are reached. Is that AI-powered loan request being denied because of your credit score – or because of an undisclosed company bias? Without transparency, AI can pursue its own goals, or those of its owner, while the user remains unaware, still believing it’s doing their bidding.

One promising solution is to put the processes on the blockchain, making algorithms verifiable and auditable by anyone. This is where Web3 tech comes in, and we’re already seeing startups explore the possibilities. Space and Time (SxT), an outfit backed by Microsoft, offers tamper-proof data feeds built on a verifiable compute layer, so SxT can ensure that the information on which AI relies is real, accurate, and untainted by any single entity.

Space and Time’s novel Proof of SQL prover guarantees that queries are computed accurately against untampered data, proving computations over blockchain histories much faster than state-of-the-art zkVMs and coprocessors. In essence, SxT helps establish trust in AI’s inputs without depending on a centralised power.

Proving AI can be trusted

Trust isn’t a one-and-done deal; it’s earned over time, much as a restaurant must maintain its standards to retain a Michelin star. AI systems must be assessed continually for performance and safety, especially in high-stakes domains like healthcare or autonomous driving. A second-rate AI prescribing the wrong medicines or hitting a pedestrian is more than a glitch; it’s a catastrophe.

This is the beauty of open-source models and on-chain verification using immutable ledgers, with built-in privacy protections provided by cryptography such as Zero-Knowledge Proofs (ZKPs). Trust isn’t the only consideration, however: users must know what AI can and can’t do in order to set their expectations realistically. If a user believes AI is infallible, they’re more likely to trust flawed output.

To date, the AI education narrative has centred on its dangers. From now on, we should work to improve users’ understanding of AI’s capabilities and limitations, the better to ensure users are empowered rather than exploited.

Compliance and accountability

As with cryptocurrency, the word ‘compliance’ comes up often when discussing AI. AI doesn’t get a pass under the law or the regulations that govern it, so how should a faceless algorithm be held accountable? The answer may lie in the modular blockchain protocol Cartesi, which ensures AI inference happens on-chain.

Cartesi’s virtual machine lets developers run standard AI libraries – like TensorFlow, PyTorch, and Llama.cpp – in a decentralised execution environment, making it suitable for on-chain AI development. In other words, a blend of blockchain transparency and computational AI.

Trust through decentralisation

The UN’s recent Technology and Innovation Report shows that while AI promises prosperity and innovation, its development risks “deepening global divides.” Decentralisation could be the answer, one that helps AI scale and instils trust in what’s under the hood.

4 signals AI will continue to be a narrative in 2025

AI innovations have taken the world by storm since ChatGPT’s debut in November 2022. We’ve seen more organisations integrate AI into their workflows: a recent survey by McKinsey revealed that 78% of the respondents work in organisations that use AI in at least one business function.

What’s more intriguing is the bounce-back of Nvidia, now the world’s leading chipmaker. Its stock price has surged by 1,635% in the past five years, making it the third most valuable company by market capitalisation, slightly behind Apple and Microsoft.

But with AI running hot over the past two years, most investors are questioning how far the AI narrative can extend. Are we close to an extended pullback or are there some fundamentals to keep the AI narrative alive?

While we may witness a period of slowed growth in the medium term, there are several reasons why the AI trend will likely continue much longer. The rest of this article highlights four key signals that point to this likelihood:

High adoption rate of new AI tech

ChatGPT set the record for the fastest-growing user base in the tech sector, eclipsing the 100 million user mark two months after its launch date. Today, there are over 400 million weekly ChatGPT users.

ChatGPT’s latest image generation feature has witnessed massive demand, with users flocking to the platform to create Ghibli-style AI images: “It’s super fun seeing people love images in ChatGPT, but our GPUs are melting,” said OpenAI CEO Sam Altman in an X post.

While there have been criticisms about copyright issues, what’s worth noting is the rate at which the world is experimenting with AI innovations. Gone are the days when AI was a niche carved out for tech bros; AI is mainstream.

Advances in digital human innovations

Imagine a digital doppelgänger that can interact with other internet users on your behalf, with the same level of personalisation you would bring yourself. This has long been a dream for many AI enthusiasts, and it may finally be becoming a reality.

It is possible to create a hyper-realistic and intelligent digital twin through an AI-powered SaaS platform like Antix. Unlike conventional avatars which lack human qualities, Antix’s digital humans offer a new level of digital interaction. They are designed to deliver realistic interactions, letting users engage in more meaningful and profound ways.

This advance in digital human innovation may transform today’s digital landscape. For the first time, it will be possible for internet users anywhere to create or acquire a digital twin that can evolve or hold conversations according to a user’s preferences and personality. For example, Antix’s digital humans give the flexibility to customise key human attributes, including style, appearance, emotions and voice.

Increased funding in the AI space

According to Statista, global funding in AI startups hit a new high of $100 billion in 2024. The trend will likely continue in 2025 as more countries allocate funding to AI innovations. Here are two key developments this year:

  • President Trump recently announced a $500 billion private sector investment in funding AI infrastructure. The goal is to make the US a leading contender in the AI race.
  • China has allocated $8.2 billion into a new AI fund following a move by the US to go ahead with additional chip export restrictions, targeting Chinese firms.

Accelerated global AI race

A global AI race has been brewing between the US and China. While it may seem like negative engagement, a competitive ecosystem is exactly what the AI space needs for growth.

For example, China’s DeepSeek R1 model was reportedly developed for only $6 million, a fraction of the cost of training models like Google’s Gemini and OpenAI’s GPT series. While some critics have argued it couldn’t have cost that little, the Chinese model has challenged its Western counterparts to be leaner in their AI development processes.

We’re also seeing the likes of OpenAI stepping up to roll out more advanced features, such as the image generator currently making waves.

Conclusion

AI is one of the fourth industrial revolution (4IR) technologies that have gained traction in recent years. While a number of issues are yet to be ironed out, including ethical concerns, it is evident that the AI narrative won’t fade away any time soon. The four factors above are a glimpse of what might sustain it. As of writing, there are more AI innovations emerging than most of us, or regulators, can keep up with. That alone should be a signal that the nascent industry is just getting started.

From punch cards to mind control: Human-computer interactions

The way we interact with our computers and smart devices is very different from the way we did in years past. Over the decades, human-computer interfaces have transformed, progressing from simple cardboard punch cards to keyboards and mice, and now to extended reality-based AI agents that can converse with us in the same way we do with friends.

With each advance in human-computer interfaces, we’re getting closer to the goal of seamless, natural interaction with machines, making computers more accessible and better integrated into our lives.

Where did it all begin?

Modern computers emerged in the first half of the 20th century and relied on punch cards to feed data into the system and enable binary computations. The cards had a series of punched holes, and light was shone at them. If the light passed through a hole and was detected by the machine, it represented a “one”. Otherwise, it was a “zero”. As you can imagine, it was extremely cumbersome, time-consuming, and error-prone.

That changed with the arrival of ENIAC, or Electronic Numerical Integrator and Computer, widely considered to be the first “Turing-complete” device that could solve a variety of numerical problems. Instead of punch cards, operating ENIAC involved manually setting a series of switches and plugging patch cords into a board to configure the computer for specific calculations, while data was inputted via a further series of switches and buttons. It was an improvement over punch cards, but not nearly as dramatic as the arrival of the modern QWERTY electronic keyboard in the early 1950s.

Keyboards, adapted from typewriters, were a game-changer, allowing users to input text-based commands more intuitively. But while they made programming faster, accessibility was still limited to those with knowledge of the highly technical programming commands required to operate computers.

GUIs and touch

The most important development in terms of computer accessibility was the graphical user interface or GUI, which finally opened computing to the masses. The first GUIs appeared in the late 1960s and were later refined by companies like IBM, Apple, and Microsoft, replacing text-based commands with a visual display made up of icons, menus, and windows.

Alongside the GUI came the iconic “mouse“, which enabled users to “point-and-click” to interact with computers. Suddenly, these machines became easily navigable, allowing almost anyone to operate one. With the arrival of the internet a few years later, the GUI and the mouse helped pave the way for the computing revolution, with computers becoming commonplace in every home and office.

The next major milestone in human-computer interfaces was the touchscreen, which first appeared in the late 1990s and did away with the need for a mouse or a separate keyboard. Users could now interact with their computers by tapping icons on the screen directly, pinching to zoom, and swiping left and right. Touchscreens eventually paved the way for the smartphone revolution that started with the arrival of the Apple iPhone in 2007 and, later, Android devices.

With the rise of mobile computing, the variety of computing devices evolved further, and in the late 2000s and early 2010s, we witnessed the emergence of wearable devices like fitness trackers and smartwatches. Such devices are designed to integrate computers into our everyday lives, and it’s possible to interact with them in newer ways, like subtle gestures and biometric signals. Fitness trackers, for instance, use sensors to keep track of how many steps we take or how far we run, and can monitor a user’s pulse to measure heart rate.

Extended reality & AI avatars

The last decade also saw the arrival of mainstream AI assistants, with early examples being Apple’s Siri and Amazon’s Alexa. These assistants use voice recognition technology to enable users to communicate with their devices using their voice.

As AI has advanced, these systems have become increasingly sophisticated and better able to understand complex instructions or questions, and can respond based on the context of the situation. With more advanced chatbots like ChatGPT, it’s possible to engage in lifelike conversations with machines, eliminating the need for any kind of physical input device.

AI is now being combined with emerging augmented reality and virtual reality technologies to further refine human-computer interactions. With AR, we can insert digital information into our surroundings by overlaying it on top of our physical environment. This is enabled by headsets like the Oculus Rift, HoloLens, and Apple Vision Pro, and it pushes the boundaries of what’s possible even further.

So-called extended reality, or XR, is the latest take on the technology, replacing traditional input methods with eye-tracking and gestures, and adding haptic feedback so users can interact with digital objects in physical environments. Instead of being restricted to flat, two-dimensional screens, our entire world becomes a computer through a blend of virtual and physical reality.

The convergence of XR and AI opens the door to more possibilities. Mawari Network is bringing AI agents and chatbots into the real world through XR technology, creating more meaningful, lifelike interactions by streaming AI avatars directly into our physical environments. The possibilities are endless – imagine an AI-powered virtual assistant standing in your home, a digital concierge that meets you in the hotel lobby, or an AI passenger that sits next to you in your car, directing you around the worst traffic jams. Through its decentralised DePIN infrastructure, Mawari is enabling AI agents to drop into our lives in real time.

The technology is nascent but it’s not fantasy. In Germany, tourists can call on an avatar called Emma to guide them to the best spots and eateries in dozens of German cities. Other examples include digital popstars like Naevis, which is pioneering the concept of virtual concerts that can be attended from anywhere.

In the coming years, we can expect to see this XR-based spatial computing combined with brain-computer interfaces (BCIs), which promise to let users control computers with their thoughts. BCIs use electrodes placed on the scalp to pick up the electrical signals generated by our brains. Although still in its infancy, the technology promises to deliver the most effective human-computer interactions possible.

The future will be seamless

The story of the human-computer interface is still being written, and as our technological capabilities advance, the distinction between digital and physical reality will become increasingly blurred.

Perhaps one day soon, we’ll be living in a world where computers are omnipresent, integrated into every aspect of our lives, similar to Star Trek’s famed holodeck. Our physical realities will be merged with the digital world, and we’ll be able to communicate, find information, and perform actions using only our thoughts. This vision would have been considered fanciful only a few years ago, but the rapid pace of innovation suggests it’s not nearly so far-fetched. Rather, it’s something that the majority of us will live to see.

Trust meets efficiency: AI and blockchain mutuality

Blockchain has tried to claim many things as its own over the years, from global payment processing to real-world assets. But in artificial intelligence, it’s found synergy with a sector willing to give something back. As this symbiotic relationship has grown, it’s become routine to hear AI and blockchain mentioned in the same breath.

While the benefits web3 technology can bring to artificial intelligence are well documented – transparency, P2P economies, tokenisation, censorship resistance, and so on – this is a reciprocal arrangement. In return, AI is fortifying blockchain projects in different ways, enhancing the ability to process vast datasets, and automating on-chain processes. The relationship may have taken a while to get started, but blockchain and AI are now entwined.

Trust meets efficiency

While AI brings intelligent automation and data-driven decision-making, blockchain offers security, decentralisation, and transparency. Together, they can address each other’s limitations, offering new opportunities in digital and real-world industries. Blockchain provides a tamper-proof foundation and AI brings adaptability, plus the ability to optimise complex systems.

Together, the two promise to enhance scalability, security, and privacy – key pillars for modern finance and supply chain applications.

AI’s ability to analyse large amounts of data is a natural fit for blockchain networks, allowing data archives to be processed in real time. Machine learning algorithms can predict network congestion – as seen with tools like Chainlink’s off-chain computation, which offers dynamic fee adjustments or transaction prioritisation.

Security also gains: AI can monitor blockchain activity in real-time to identify anomalies more quickly than manual scans, so teams can move to mitigate attacks. Privacy is improved, with AI managing zero-knowledge proofs and other cryptographic techniques to shield user data; methods explored by projects like Zcash. These types of enhancements make blockchain more robust and attractive to the enterprise.

In DeFi, Giza‘s agent-driven markets embody the convergence of web3 and artificial intelligence. Its protocol runs autonomous agents like ARMA, which manage yield strategies across protocols and offer real-time adaptation. Secured by smart accounts and decentralised execution, agents can deliver positive yields, and currently manage hundreds of thousands of dollars in on-chain assets. Giza shows how AI can optimise decentralised finance and is a project that uses the two technologies to good effect.

Blockchain as AI’s backbone

Blockchain offers AI a decentralised infrastructure to foster trust and collaboration. AI models, often opaque and centralised, face scrutiny over data integrity and bias – issues blockchain counters with transparent, immutable records. Platforms like Ocean Protocol use blockchain to log AI training data, providing traceability without compromising ownership. That can be a boon for sectors like healthcare, where the need for verifiable analytics is important.

Decentralisation also enables secure multi-party computation, where AI agents collaborate across organisations – think federated learning for drug discovery – without a central authority, as demonstrated in 2024 by IBM’s blockchain AI pilots. The trustless framework reduces reliance on big tech, helping to democratise AI.

While AI can enhance blockchain performance, blockchain itself can provide a foundation for ethical and secure AI deployment. The transparency and immutability with which blockchain is associated can mitigate AI-related risks by ensuring AI model integrity, for example. AI algorithms and training datasets can be recorded on-chain so they’re auditable. Web3 technology helps in governance models for AI, as stakeholders can oversee and regulate project development, reducing the risks of biased or unethical AI.

Digital technologies with real-world impact

The synergy between blockchain and AI exists now. In supply chains, AI helps to optimise logistics while blockchain can track item provenance. In energy, blockchain-based smart grids paired with AI can predict demand; Siemens reported a 15% efficiency gain in a 2024 trial of such a system in Germany. These cases highlight how AI scales blockchain’s utility, while the latter’s security can realise AI’s potential. Together, they create smart, reliable systems.

The relationship between AI and blockchain is less a merger than a mutual enhancement. Blockchain’s trust and decentralisation ground AI’s adaptability, while AI’s optimisation unlocks blockchain’s potential beyond that of a static ledger. From supply chain transparency to DeFi’s capital efficiency, their combined impact is tangible, yet their relationship is just beginning.

The role of machine learning in enhancing cloud-native container security

The advent of more powerful processors in the early 2000s started the computing revolution that led to what we now call the cloud. With single hardware instances able to run dozens, if not hundreds, of virtual machines concurrently, businesses could offer their users multiple services and applications that would otherwise have been financially impractical, if not impossible.

But virtual machines (VMs) have several downsides. An entire virtualised operating system is often overkill for the application it hosts, and although VMs are far more malleable, scalable, and agile than a fleet of bare-metal servers, they still require significantly more memory and processing power than the next evolution of this type of technology – containers. In addition to being more easily scaled (up or down, according to demand), containerised applications consist of only the necessary parts of an application and its supporting dependencies, so apps based on microservices tend to be lighter and more easily configurable.

Virtual machines exhibit the same security issues that affect their bare-metal counterparts, and to some extent, container security issues reflect those of their component parts: a MySQL bug in a specific version of the upstream application will affect containerised versions too. Across VMs, bare-metal installs, and containers, cybersecurity concerns and activities are broadly similar. But container deployments and their tooling bring specific security challenges to those charged with running apps and services, whether manually piecing together applications from chosen containers or running in production with orchestration at scale.

Container-specific security risks

  • Misconfiguration: Complex applications are made up of multiple containers, and misconfiguration – often just a single line in a .yaml file – can grant unnecessary privileges and increase the attack surface. For example, although it’s not trivial for an attacker to gain root access to the host machine from a container, it’s still an all-too-common practice to run Docker as root with no user namespace remapping.
  • Vulnerable container images: In 2022, Sysdig found over 1,600 images identified as malicious in Docker Hub, in addition to many containers stored in the repo with hard-coded cloud credentials, ssh keys, and NPM tokens. The process of pulling images from public registries is opaque, and the convenience of container deployment (plus pressure on developers to produce results, fast) can mean that apps can easily be constructed with inherently insecure, or even malicious components.
  • Orchestration layers: For larger projects, orchestration tools such as Kubernetes can increase the attack surface, usually due to misconfiguration and high levels of complexity. A 2022 survey from D2iQ found that only 42% of applications running on Kubernetes made it into production – down in part to the difficulty of administering large clusters and a steep learning curve.

According to Ari Weil at Akamai, “Kubernetes is mature, but most companies and developers don’t realise how complex […] it can be until they’re actually at scale.”

Container security with machine learning

The specific challenges of container security can be addressed using machine learning algorithms trained on observing the components of an application when it’s ‘running clean.’ By creating a baseline of normal behaviour, machine learning can identify anomalies that could indicate potential threats from unusual traffic, unauthorised changes to configuration, odd user access patterns, and unexpected system calls.
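
As a rough illustration of the baseline-then-detect idea (a generic sketch, not any particular vendor's implementation), the snippet below fits an anomaly detector to feature vectors collected while containers run clean, then flags outliers. The feature choices and numbers are assumptions for illustration.

```python
# Hedged sketch of baseline anomaly detection over container telemetry.
# Feature names and values are illustrative; a real system would stream
# metrics from the runtime (syscall rates, network I/O, config changes).
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline: feature vectors sampled while the application is "running clean".
# Columns: [syscalls/sec, network KB/sec, config changes/hour]
baseline = np.array([
    [120, 450, 0],
    [115, 430, 0],
    [130, 470, 1],
    [125, 440, 0],
    [118, 455, 0],
])

detector = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

# New observations from running containers.
observed = np.array([
    [122, 448, 0],    # close to the baseline
    [900, 9800, 14],  # unusual traffic and configuration churn
])

for sample, label in zip(observed, detector.predict(observed)):
    status = "anomaly" if label == -1 else "normal"
    print(f"{sample.tolist()} -> {status}")
```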

ML-based container security platforms can scan image repositories and compare each against databases of known vulnerabilities and issues. Scans can be automatically triggered and scheduled, helping prevent the addition of harmful elements during development and in production. Auto-generated audit reports can be tracked against standard benchmarks, or an organisation can set its own security standards – useful in environments where highly-sensitive data is processed.
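
Conventional CVE matching underpins this kind of scanning. As a hedged sketch of how a scheduled scan step might be wired up, the snippet below shells out to the open-source Trivy scanner (assuming it is installed and on the PATH) and summarises the findings; the image name is a placeholder and the JSON field names reflect Trivy's commonly documented output.

```python
# Hedged sketch: trigger an image scan and summarise the findings.
# Assumes the open-source Trivy CLI is installed and on the PATH; the image
# name is a placeholder and the JSON layout reflects Trivy's documented output.
import json
import subprocess

def scan_image(image: str) -> int:
    """Run a vulnerability scan against an image and return the finding count."""
    result = subprocess.run(
        ["trivy", "image", "--format", "json", image],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    findings = sum(
        len(target.get("Vulnerabilities") or [])
        for target in report.get("Results") or []
    )
    print(f"{image}: {findings} known vulnerabilities")
    return findings

if __name__ == "__main__":
    scan_image("nginx:1.25")  # placeholder image tag
```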

The connectivity between specialist container security functions and orchestration software means that suspected containers can be isolated or closed immediately, insecure permissions revoked, and user access suspended. With API connections to local firewalls and VPN endpoints, entire environments or subnets can be isolated, or traffic stopped at network borders.

Final word

Machine learning can reduce the risk of data breaches in containerised environments by working on several levels. Anomaly detection, asset scanning, and the flagging of potential misconfigurations are all possible, and any degree of automated alerting or remediation is relatively simple to enact.

The transformative possibilities of container-based apps can be approached without the security issues that have stopped some from exploring, developing, and running microservice-based applications. The advantages of cloud-native technologies can be won without compromising existing security standards, even in high-risk sectors.

How AI and web 3.0 can reshape digital interactions

Digital interactions have become a major part of our life; according to the latest statistics, there were over 5.52 billion internet users as of October 2024, with 67.5% being social media users.

But despite the prominence of the digital space in today’s world, most of the interactions are still subpar when it comes to the aspect of personalisation. What does this mean?

Put simply, the different categories of internet users, which include individuals, companies, and influencers, do not have the flexibility or the options to fully express their individuality, customise content, or provide targeted services for specific markets.

Most of the digital platforms that currently exist only provide an avenue for internet users to create static profiles made up of personal data.

This should not be the case in a world where most interactions happen online. Digital profiles ought to be more than a collection of data; they should mimic a fully-developed persona that internet users can use to express themselves authentically or in a more personalised way in their digital interactions.

Setting the stage for futuristic digital interactions

Innovation did not stop with the internet or Web 2.0 social media networks. We now have more advanced technologies, notably AI and web 3.0, which are proving to be game-changers in the hyper-personalisation of digital experiences. So, how are the two technologies adding value to today’s static digital profiles?

Let’s start with AI. Innovations in this space have been the talk of the technology community and beyond, with significant funding flowing into the industry over the past two years. While most people are only familiar with generative AI use cases, this nascent technology has the potential to support the creation of hyper-realistic and intelligent digital human avatars that could replace static profiles or business chatbots whose capabilities remain limited.

On the other hand, web 3.0 introduces a futuristic digital space where personalised avatars can interact, trade or engage in more advanced activities like hosting meetings or events. Although possible with web 2.0 platforms as well, web 3.0 innovations are going a level higher to feature NFTs and utility tokens, which let users create adaptable human avatars or purchase advanced customisation features to make avatars more personalised.

A case study of the Antix AI-powered SaaS platform

Antix is one of the few innovations that currently uses integrated GPT-4.0 support and a web 3.0 utility token to create hyper-realistic and intelligent digital human avatars.

The AI-powered software-as-a-service (SaaS) platform enhances digital interactions by providing individuals, companies, and influencers an opportunity to use hyper-personalised digital humans to deliver hyper-realistic interactions.

Antix’s digital humans use advanced machine learning and natural language processing to make digital interactions more personalised. Notably, digital humans are designed as non-fungible tokens (NFTs) which means they can evolve alongside the owner. Internet citizens can use the Antix platform to create highly personalised and adaptable digital profiles that feature a multitude of customisations which include style, emotions, appearance, and voice.

Antix’s digital humans can be customised to operate as the face of a brand, representing it in the digital space and performing key functions like engaging with an audience, hosting virtual events, and running marketing campaigns. They also handle customer support better than typical chatbots because of their personalised make-up.

Digital humans could be useful for influencers who consistently produce new content for their audiences. Instead of shooting content themselves, influencers can delegate the role to Antix’s digital humans. The benefits of this approach include lower equipment costs, simplified content adaptation, and the option to remain anonymous.

It is also important to highlight that the ecosystem is powered by a utility token dubbed $ANTIX. The token supports key functions on the Antix platform, including subscription purchases, asset repairs, and ecosystem rewards.

A new dawn for digital interactions

For almost three decades now, digital interactions have mostly revolved around static personas. This could be about to change; advancements in 4IR technologies like AI and web 3.0 are bringing more value to the digital space.

While it may take a few years before most people embrace the concept of AI-powered digital humans and decentralised marketplaces, it is only a matter of time before demand for digital twins that mimic real-life personas goes through the roof. The shift will mark a new dawn: a time when digital interactions are not only hyper-personalised but feel almost real.

Web 3.0 is poised to be the economic powerhouse of the digital interaction space. In fact, we’re already seeing this evolution with AI-powered agents tasked with operations in the cryptocurrency economy. It is not a question of if, but when, digital humans will become one of the main forms of interaction on the internet.

OpenAI targets business sector with advanced AI tools

OpenAI, the powerhouse behind ChatGPT, is ramping up efforts to dominate the enterprise market with a suite of AI tools tailored for business users.

The company recently revealed its plans to introduce a series of enhancements designed to make AI integration seamless for companies of all sizes. This includes updates to its flagship AI agent technology, expected to transform workplace productivity by automating complex workflows, from financial analysis to customer service.

“Businesses are looking for solutions that go beyond surface-level assistance. Our agents are designed to provide in-depth, actionable insights,” said Sarah Friar, CFO of OpenAI. “This is particularly relevant as enterprises seek to streamline operations in today’s competitive landscape.”

OpenAI’s corporate strategy builds on its ongoing collaborations with tech leaders such as Microsoft, which has already integrated OpenAI’s technology into its Azure cloud platform. Analysts say these partnerships position OpenAI to rival established enterprise solutions providers like Salesforce and Oracle.

AI research assistant tools 

As part of its enterprise-focused initiatives, OpenAI is emphasising the development of AI research tools that cater to specific industries. 

For instance, its AI models are being trained on legal and medical data to create highly specialised assistants that could redefine research-intensive sectors. This focus aligns with the broader market demand for AI-driven solutions that enhance decision-making and efficiency.

Infrastructure for expansion 

OpenAI’s rapid growth strategy is supported by a robust infrastructure push. The company has committed to building state-of-the-art data centres in Europe and Asia, aiming to lower latency and improve service reliability for global users. The investments reflect OpenAI’s long-term vision of becoming a critical enabler of the AI-driven global economy.

Challenges and issues

However, challenges persist. The company faces mounting pressure from regulators concerned about data privacy and the ethical implications of deploying powerful AI tools. Critics also question the sustainability of OpenAI’s ambitious growth targets, given its significant operational costs and strong competition from other tech giants.

Despite these hurdles, OpenAI remains optimistic about its trajectory. With plans to unveil its expanded portfolio at the upcoming Global AI Summit, the company is well-positioned to strengthen its foothold in the burgeoning AI enterprise market.

(Editor’s note: This article is sponsored by AI Tools Network)

The post OpenAI targets business sector with advanced AI tools appeared first on AI News.

Copyright concerns create need for a fair alternative in AI sector https://www.artificialintelligence-news.com/news/copyright-concerns-create-need-for-a-fair-alternative-in-ai-sector/ https://www.artificialintelligence-news.com/news/copyright-concerns-create-need-for-a-fair-alternative-in-ai-sector/#respond Thu, 09 Jan 2025 14:27:11 +0000 https://www.artificialintelligence-news.com/?p=16834 When future generations look back at the rise of artificial intelligence technologies, the year 2025 may be remembered as a major turning point, when the industry took concrete steps towards greater inclusion, and embraced decentralised frameworks that recognise and fairly compensate every stakeholder. The growth of AI has already sparked transformation in multiple industries, but […]

When future generations look back at the rise of artificial intelligence technologies, the year 2025 may be remembered as a major turning point, when the industry took concrete steps towards greater inclusion, and embraced decentralised frameworks that recognise and fairly compensate every stakeholder.

The growth of AI has already sparked transformation in multiple industries, but the pace of uptake has also led to concerns around data ownership, privacy and copyright infringement. Because AI is centralised with the most powerful models controlled by corporations, content creators have largely been sidelined.

OpenAI, the world’s most prominent AI company, has already admitted that’s the case. In January 2024, it told the UK’s House of Lords Communications and Digital Select Committee that it would not have been able to create its iconic chatbot, ChatGPT, without training it on copyrighted material.

OpenAI trained ChatGPT on everything that was posted on the public internet prior to 2023, but the people who created that content – much of which is copyrighted – have not been paid any compensation, and that remains a major source of contention.

There’s an opportunity for decentralised AI projects like that proposed by the ASI Alliance to offer an alternative way of AI model development. The Alliance is building a framework that gives content creators a method to retain control over their data, along with mechanisms for fair reward should they choose to share their material with AI model makers. It’s a more ethical basis for AI development, and 2025 could be the year it gets more attention.

AI’s copyright conundrum

OpenAI isn’t the only AI company that’s been accused of copyright infringement. The vast majority of AI models, including those that purport to be open-source, like Meta Platforms’ Llama 3, were trained by scraping the public internet for data.

AI developers routinely help themselves to whatever content they find online, ignoring the fact that much of the material is copyrighted. Copyright laws are designed to protect the creators of original works, like books, articles, songs, software, artworks and photos, from being exploited, and make unauthorised use of such materials illegal.

The likes of OpenAI, Meta, Anthropic, StabilityAI, Perplexity AI, Cohere, and AI21 Labs get round the law by claiming ‘fair use’, a reference to an ambiguous provision in copyright law that allows limited use of protected content without the creator’s permission. But there’s no clear definition of what actually constitutes ‘fair use’, and many authors claim that AI threatens their livelihoods.

Many content creators have resorted to legal action, most prominently the lawsuit filed by the New York Times against OpenAI. In the suit, the Times alleges that OpenAI committed copyright infringement when it ingested thousands of articles to train its large language models. The media organisation claims the practice is unlawful because ChatGPT is a competing product that aims to ‘steal audience’ from the Times website.

The lawsuit has led to a debate – should AI companies be allowed to keep consuming any content on the internet, or should they be compelled to ask for permission first, and compensate those who create training data?

Consensus appears to be shifting toward the latter. For instance, the late former OpenAI researcher Suchir Balaji told the Times in an interview that he was tasked with leading the collection of data to train ChatGPT’s models. He said his job involved scraping content from every possible source, including user-generated posts on social media, pirated book archives, and articles behind paywalls, all without permission being sought.

Balaji said he initially accepted OpenAI’s argument that if information was posted online and freely available, scraping it constituted fair use. He later began to question that stance, however, after realising that products like ChatGPT could harm content creators. Ultimately, he said, he could no longer justify the practice of scraping data, and he resigned from the company in the summer of 2024.

A growing case for decentralised AI

Balaji’s departure from OpenAI appears to coincide with a realisation among AI companies that the practice of helping themselves to any content found online is unsustainable, and that content creators need legal protection.

Evidence of this comes from the spate of content licensing deals announced over the last year. OpenAI has agreed deals with a number of high-profile content publishers, including the Financial Times, NewsCorp, Conde Nast, Axel Springer, Associated Press, and Reddit, which hosts millions of pages of user-generated content on its forums. Other AI developers, like Google, Microsoft and Meta, have forged similar partnerships.

But it remains to be seen if these arrangements will prove to be satisfactory, especially if AI firms generate billions of dollars in revenue. While the terms of the content licensing deals haven’t been made public, The Information claims they are worth a few million dollars per year at most. Considering that OpenAI’s former chief scientist Ilya Sutskever was paid a salary of $1.9 million in 2016, the money offered to publishers may fall short of what content is really worth.

There’s also the fact that millions of smaller content creators – bloggers, social media influencers and the like – continue to be excluded from such deals.

The arguments around AI’s infringement of copyright are likely to last years without being resolved. In the meantime, the legal ambiguity around data scraping, together with the growing recognition among practitioners that such practices are unethical, is strengthening the case for decentralised frameworks.

Decentralised AI frameworks provide developers with a more principled model for AI training where the rights of content creators are respected, and where every contributor can be rewarded fairly.

Sitting at the heart of decentralised AI is blockchain, which enables the development, training, deployment, and governance of AI models across distributed, global networks owned by everyone. This means everyone can participate in building AI systems that are transparent, as opposed to centralised, corporate-owned AI models that are often described as “black boxes.”

Just as the arguments around AI copyright infringement intensify, decentralised AI projects are making inroads; this year promises to be an important one in the shift towards more transparent and ethical AI development.

Decentralised AI in action

Late in 2024, three blockchain-based AI startups formed the Artificial Superintelligence (ASI) Alliance, an organisation working towards the creation of a “decentralised superintelligence” to power advanced AI systems anyone can use.

The ASI Alliance says it’s the largest open-source, independent player in AI research and development. It was created by SingularityNET, which has developed a decentralised AI network and compute layer; Fetch.ai, focused on building autonomous AI agents that can perform complex tasks without human assistance; and Ocean Protocol, the creator of a transparent exchange for AI training data.

The ASI Alliance’s mission is to provide an alternative to centralised AI systems, emphasising open-source and decentralised platforms, including data and compute resources.

To protect content creators, the ASI Alliance is building an exchange framework based on Ocean Protocol’s technology, where anyone can contribute data to be used for AI training. Users will be able to upload data to the blockchain-based system and retain ownership of it, earning rewards whenever it’s accessed by AI models or developers. Others will be able to contribute by helping to label and annotate data to make it more accessible to AI models, and earn rewards for performing this work. In this way, the ASI Alliance promotes a more ethical way for developers to obtain the training data they need to create AI models.
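As a rough illustration of the bookkeeping such an exchange implies, here is a minimal, in-memory sketch in Python: contributors list datasets, retain ownership, and accrue rewards each time their data is accessed. It is a hypothetical model, not Ocean Protocol’s or the ASI Alliance’s actual interface.

```python
# Purely illustrative, in-memory sketch of a decentralised data exchange's
# bookkeeping: contributors register datasets, keep ownership, and accrue
# rewards per access. Not the actual Ocean Protocol / ASI Alliance API.
from dataclasses import dataclass

@dataclass
class Dataset:
    owner: str
    price_per_access: float
    earnings: float = 0.0

class DataExchange:
    def __init__(self):
        self.datasets: dict[str, Dataset] = {}

    def register(self, dataset_id: str, owner: str, price_per_access: float) -> None:
        """The contributor lists a dataset but retains ownership of it."""
        self.datasets[dataset_id] = Dataset(owner, price_per_access)

    def access(self, dataset_id: str, consumer: str) -> str:
        """A model developer pays the listed price; the owner accrues the reward."""
        ds = self.datasets[dataset_id]
        ds.earnings += ds.price_per_access
        return f"{consumer} granted access to {dataset_id}; {ds.owner} credited {ds.price_per_access}"

exchange = DataExchange()
exchange.register("climate-sensor-logs", owner="alice", price_per_access=2.5)
print(exchange.access("climate-sensor-logs", consumer="model-lab"))
```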

Shortly after forming, the Alliance launched the ASI<Train/> initiative, focused on the development of more transparent and ethical “domain-specific models” specialising in areas like robotics, science, and medicine. Its first model is Cortex, which is said to be modelled on the human brain and designed to power autonomous robots in real-world environments.

The specialised models differ from general-purpose LLMs, which are great at answering questions and creating content and images, but less useful when asked to solve more complex problems that require significant expertise. But creating specialised models will be a community effort: the ASI Alliance needs industry experts to provide the necessary data to train models.

Fetch.ai’s CEO Humayun Sheikh said the ASI Alliance’s decentralised ownership model creates an ecosystem “where individuals support groundbreaking technology and share in value creation.”

Users without specific knowledge can buy and “stake” FET tokens to become part-owners of decentralised AI models and earn a share of the revenue they generate when they’re used by AI applications.
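The revenue-sharing arithmetic behind staking can be sketched in a few lines. The token amounts, single pool, and pro-rata rule below are assumptions for illustration, not the logic of any actual FET staking contract.

```python
# Illustrative arithmetic only: splitting revenue pro rata among stakers.
def distribute_revenue(stakes: dict[str, float], revenue: float) -> dict[str, float]:
    """Each staker receives a share of revenue proportional to their stake."""
    total = sum(stakes.values())
    return {staker: revenue * amount / total for staker, amount in stakes.items()}

stakes = {"alice": 6_000.0, "bob": 3_000.0, "carol": 1_000.0}
print(distribute_revenue(stakes, revenue=500.0))
# {'alice': 300.0, 'bob': 150.0, 'carol': 50.0}
```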

For content creators, the benefits of a decentralised approach to AI are clear. ASI’s framework lets them keep control of their data and track when it’s used by AI models. It integrates mechanisms encoded in smart contracts to ensure that everyone is fairly compensated. Participants earn rewards for contributing computational resources, data, and expertise, or by supporting the ecosystem through staking.

The ASI Alliance operates a model of decentralised governance, where token holders can vote on key decisions to ensure the project evolves to benefit stakeholders, rather than the shareholders of corporations.
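A toy example of token-weighted voting is sketched below; the simple-majority threshold and single-proposal design are assumptions made for illustration, not the Alliance’s actual governance rules.

```python
# Toy token-weighted governance: each holder's vote counts in proportion to
# the tokens they hold. Thresholds and structure are illustrative assumptions.
def tally(votes: dict[str, tuple[float, bool]]) -> bool:
    """votes maps holder -> (token balance, True for 'yes' / False for 'no')."""
    yes = sum(balance for balance, in_favour in votes.values() if in_favour)
    total = sum(balance for balance, _ in votes.values())
    return yes / total > 0.5  # simple majority of voting tokens

votes = {
    "alice": (6_000.0, True),
    "bob": (3_000.0, False),
    "carol": (1_000.0, True),
}
print(tally(votes))  # True: 7,000 of 10,000 voting tokens in favour
```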

AI for everyone is a necessity

The progress made by decentralised AI is exciting, and it comes at a time when it’s needed. AI is evolving quickly, and centralised AI companies are currently at the forefront of adoption, which for many is a major cause for concern.

Given the transformative potential of AI and the risks it poses to individual livelihoods, it’s important that the industry shifts to more responsible models. AI systems should be developed for the benefit of everyone, and that means rewarding every contributor for their participation. So far, only decentralised AI systems have shown they can do this.

Decentralised AI is not just a nice-to-have but a necessity, representing the only viable alternative capable of breaking big tech’s stranglehold on creativity.

The post Copyright concerns create need for a fair alternative in AI sector appeared first on AI News.

AI and Big Data Expo Global: Less than 4 weeks to go! https://www.artificialintelligence-news.com/news/ai-and-big-data-expo-global-less-than-4-weeks-to-go/ https://www.artificialintelligence-news.com/news/ai-and-big-data-expo-global-less-than-4-weeks-to-go/#respond Wed, 08 Jan 2025 18:47:19 +0000 https://www.artificialintelligence-news.com/?p=16824 AI and Big Data Expo Global is under four weeks away. Set to take place at the Olympia, London, on 5-6 February 2025, this must-attend artificial intelligence and big data event is for professionals from all industries looking to learn more about the newest technology solutions. Register here: https://www.ai-expo.net/global/ Key highlights: In today’s landscape, AI […]

AI and Big Data Expo Global is under four weeks away. Set to take place at the Olympia, London, on 5-6 February 2025, this must-attend artificial intelligence and big data event is for professionals from all industries looking to learn more about the newest technology solutions.

Register here: https://www.ai-expo.net/global/

Key highlights:

  • Headline speakers: The event boasts a stellar line-up of more than 150 speakers from leading global organisations including NVIDIA, LinkedIn, Unilever, Sainsbury’s, Co-op, Salesforce, BT Group, Meta, Lloyds Banking Group, Philips, The Economist, Jaguar Land Rover, and many others. These industry leaders will share their expertise and visions on how AI and Big Data are shaping the future across various sectors.
  • Industry-leading agenda including:
    • Strategic insights into the convergence of machine learning, natural language processing, and neural architectures shaping AI’s future.
    • Explore how AI is transforming businesses globally, beyond just augmenting intelligence.
    • Understand how AI impacts work, organisational culture, trust, and leadership.
    • Examine AI’s effect on skills, human-AI collaboration, and the workplace experience.
    • Empower your organisation to navigate the AI transformation journey.
    • Dive into advanced analytics and AI for smarter, data-driven business decisions.
  • Networking opportunities: With more than 7,000 attendees expected, the AI and Big Data Expo offers opportunities for networking, including the Networking drinks on Day 1 of the event. Plus, utilise our AI-powered matchmaking tool to connect with potential collaborators, clients and thought leaders from around the globe.
  • Co-located shows: Gain access to nine co-located events, covering a wide range of technological innovations and trends. This multi-event format ensures attendees can explore the intersection of AI, big data and other emerging technologies.
  • Exhibition floor: Discover the latest innovations from more than 150 industry-leading solution providers, including Salesforce, Experian, Edge Impulse, Snowflake, Coursera and more. The exhibition floor is your gateway to seeing cutting-edge products and services first-hand, offering solutions that can transform your business.

In today’s landscape, AI isn’t just a tool—it’s a strategic imperative. Executives and senior employees need to stay ahead of emerging trends to drive innovation, efficiency, and growth across their organisations.

Discover how AI can transform your business! Dive deep into cutting-edge sessions covering everything from AI ethics and infrastructure to human-AI collaboration and revolutionary use cases.

Register today:

Don’t miss your chance to attend this world-leading event and elevate your AI expertise. Secure your pass today by visiting our registration page.

About AI & Big Data Expo:

The AI and Big Data Expo is part of TechEx—the leading technology event: https://lnkd.in/erp6-F_M. Prepare for two days of unrivalled access to the trends and innovations shaping the future of AI, automation, and big data. Plus, gain access to nine co-located events all under the TechEx Events Series. Don’t miss out!

We look forward to welcoming you to the AI & Big Data Expo Global in London!

The post AI and Big Data Expo Global: Less than 4 weeks to go! appeared first on AI News.

What might happen if AI can feel emotions? https://www.artificialintelligence-news.com/news/what-might-happen-if-ai-can-feel-emotions/ https://www.artificialintelligence-news.com/news/what-might-happen-if-ai-can-feel-emotions/#respond Thu, 19 Dec 2024 08:51:31 +0000 https://www.artificialintelligence-news.com/?p=16732 In a world where artificial intelligence is becoming omnipresent, it’s fascinating to think about the prospect of AI-powered robots and digital avatars that can experience emotions, similar to humans. AI models lack consciousness and they don’t have the capacity to feel emotions, but what possibilities might arise if that were to change? The birth of […]

In a world where artificial intelligence is becoming omnipresent, it’s fascinating to think about the prospect of AI-powered robots and digital avatars that can experience emotions, similar to humans.

AI models lack consciousness and they don’t have the capacity to feel emotions, but what possibilities might arise if that were to change?

The birth of emotional AI

The prospect of an AI system embracing those first sparks of emotion is perhaps not as far-fetched as one might think. Already, AI systems have some ability to gauge people’s emotions, and increasingly they’re also able to replicate those feelings in their interactions with humans.

It still requires a leap of faith to imagine an AI that could feel genuine emotions, but if that ever becomes possible, those emotions would probably be somewhat basic at first, similar to a child’s. Perhaps an AI system might feel joy at successfully completing a task, or confusion when presented with a challenge it doesn’t know how to solve. From there, it’s not difficult to envision that confusion evolving into frustration at repeated failures to tackle the problem in question. And as the system evolves further, its emotional spectrum might expand to include a tinge of sadness or regret.

Should AI ever be able to feel such emotions, it might not be long before it could express more nuanced feelings, like excitement, impatience, and empathy for humans and other AIs. For instance, in a scenario where an AI system acquires a new skill or solves a new kind of problem, it might experience a degree of satisfaction in its success, similar to how humans feel when they solve a particularly taxing challenge, like a complex jigsaw puzzle, or when they do something for the first time, like driving a car.

Empathy as a motivator

As AI’s ability to feel emotion evolves, it would become increasingly complex, progressing to a stage where it can even feel empathy for others. Empathy is one of the most complex human emotions, involving understanding and sharing the feelings of someone else.

If AI can experience such feelings, they may inspire it to become more helpful, similar to how humans are sometimes motivated to help someone less fortunate.

An AI that’s designed to assist human doctors might feel sad for someone who is afflicted by a mysterious illness. The feelings might push it to try harder to find a diagnosis for the rare disease that person is suffering from. If it gets it right, the AI might feel an overwhelming sense of accomplishment at doing so, knowing that the afflicted patient will be able to receive the treatment they need.

Or we can consider an AI system that’s built to detect changes to an environment. If such a system were to recognise a substantial increase in pollution in a certain area, it might feel disappointed or even saddened by such a discovery. But like with humans, the feelings might also inspire the AI to find ways to prevent this new source of pollution, perhaps by inventing a more efficient way to recycle or dispose of the toxic substance responsible.

In a similar way, an AI system that encounters numerous errors in a dataset might be compelled to refine its algorithm to reduce the number of errors.

This would also have a direct impact on human-to-AI interactions. It’s not hard to imagine that an AI-powered customer service bot that feels empathy for a customer might be willing to go the extra mile to help resolve that person’s problem. Or alternatively, we might get AI teachers with a better understanding of their students’ emotions, which can then adapt teaching methods appropriately.

Empathetic AI could transform the way we treat people with mental health issues. The concept of a digital therapist is not new, but a digital therapist that can relate to its patients on an emotional level would be better placed to figure out how best to support them.

Is this even possible?

Surprisingly, we may not be that far off. Platforms like Antix are already capable of expressing artificial empathy. Antix is a platform for creating digital humans that are programmed to respond sympathetically when they recognise frustration, anger, or upset in the people they interact with. Its digital humans can detect people’s emotions based on their speech, the words they use, their intonation, and their body language.

The ability of Antix’s digital humans to understand emotion is partly based on the way they are trained. Each digital human is a unique non-fungible token (NFT) that learns over time from its users, gaining more knowledge and evolving so it can adapt its interactions to an individual’s behaviour and preferences.

Because digital humans can recognise emotions and replicate them, they have the potential to deliver more profound and meaningful experiences. Antix utilises the Unreal Engine 5 platform to give its creations a more realistic appearance. Creators can alter almost every aspect of their digital humans, including the voice and appearance, with the ability to edit skin tone, eye colour, and small details like eyebrows and facial hair.

What sets Antix apart from other AI platforms is that users can customise the behaviour of their digital humans to provide the most appropriate emotional response in different scenarios. A digital human can respond with a suitable tone of voice, making the right gestures and expressions when it needs to convey sadness, for example, before switching in an instant to express excitement, happiness, or joy.
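As a rough sketch of the detect-emotion-then-adapt-tone loop described above, here is a toy, keyword-based example in Python. Real systems such as Antix reportedly also draw on intonation and body language; the keywords and tone mapping here are invented purely for illustration.

```python
# Toy sketch of detecting an emotion from text and choosing a response tone.
# Keyword lists and tone labels are invented for illustration only.
import re

EMOTION_KEYWORDS = {
    "frustration": {"annoyed", "frustrated", "useless", "stuck"},
    "sadness": {"sad", "upset", "disappointed"},
    "joy": {"great", "thanks", "love", "happy"},
}

RESPONSE_TONE = {
    "frustration": "calm and apologetic",
    "sadness": "gentle and reassuring",
    "joy": "upbeat and enthusiastic",
    "neutral": "plain and informative",
}

def detect_emotion(utterance: str) -> str:
    """Return the first emotion whose keywords appear in the utterance."""
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

def choose_tone(utterance: str) -> str:
    return RESPONSE_TONE[detect_emotion(utterance)]

print(choose_tone("I'm so frustrated, this keeps failing"))  # calm and apologetic
```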

AI is getting real

Emotional AI systems are a work in progress, and the result will be digital humans that feel more lifelike in any scenario where they can be useful.

The CEO of Zoom has talked about the emergence of AI-powered digital twins that can participate in video calls on their user’s behalf, allowing the user to be in two places at once, so to speak. If the digital human version of your boss can express empathy, satisfaction, excitement and anger, the concept would be more effective, fostering a more realistic connection, even if the real boss isn’t present in their physical form.

A customer service-focused digital human that’s able to empathise with callers will likely have a tremendous impact on customer satisfaction, and a sympathetic digital teacher might find ways to elicit more positive responses from its students, accelerating the speed at which they learn.

With digital humans capable of expressing emotions, the potential for more realistic, lifelike, and immersive experiences is almost limitless, and it will result in more rewarding and beneficial interactions with AI systems. 

The post What might happen if AI can feel emotions? appeared first on AI News.
