The post Deepgram Nova-3 Medical: AI speech model cuts healthcare transcription errors appeared first on AI News.
Designed to integrate seamlessly with existing clinical workflows, Nova-3 Medical aims to address the growing need for accurate and efficient transcription in the UK’s public NHS and private healthcare landscape.
As electronic health records (EHRs), telemedicine, and digital health platforms become increasingly prevalent, the demand for reliable AI-powered transcription has never been higher. However, traditional speech-to-text models often struggle with the complex and specialised vocabulary used in clinical settings, leading to errors and “hallucinations” that can compromise patient care.
Deepgram’s Nova-3 Medical is engineered to overcome these challenges. The model leverages advanced machine learning and specialised medical vocabulary training to accurately capture medical terms, acronyms, and clinical jargon—even in challenging audio conditions. This is particularly crucial in environments where healthcare professionals may move away from recording devices.
“Nova‑3 Medical represents a significant leap forward in our commitment to transforming clinical documentation through AI,” said Scott Stephenson, CEO of Deepgram. “By addressing the nuances of clinical language and offering unprecedented customisation, we are empowering developers to build products that improve patient care and operational efficiency.”
One of the key features of the model is its ability to deliver structured transcriptions that integrate seamlessly with clinical workflows and EHR systems, ensuring vital patient data is accurately organised and readily accessible. The model also offers flexible, self-service customisation, including Keyterm Prompting for up to 100 key terms, allowing developers to tailor the solution to the unique needs of various medical specialties.
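As an illustration of how Keyterm Prompting might be wired into a request, the sketch below composes a transcription request URL that boosts recognition of domain-specific terms. The endpoint, model name, and repeated `keyterm` parameter reflect Deepgram's public API conventions but are assumptions here; verify against the current documentation before relying on them.

```python
from urllib.parse import urlencode

# Hypothetical sketch of Keyterm Prompting: build a request URL that asks
# the model to boost recognition of specific medical terms. Endpoint, model
# name, and parameter name are assumptions, not confirmed by this article.
BASE_URL = "https://api.deepgram.com/v1/listen"

def build_transcription_url(model, keyterms):
    """Return a request URL with one repeated keyterm parameter per term."""
    params = [("model", model)] + [("keyterm", term) for term in keyterms]
    return f"{BASE_URL}?{urlencode(params)}"

url = build_transcription_url("nova-3-medical", ["metoprolol", "tachycardia"])
print(url)
```

In a real integration, the composed URL would be used for a streaming or pre-recorded audio request with the appropriate authentication header.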
Versatile deployment options – including on-premises and Virtual Private Cloud (VPC) configurations – ensure enterprise-grade security and HIPAA compliance, which is crucial for meeting UK data protection regulations.
“Speech-to-text for enterprise use cases is not trivial, and there is a fundamental difference between voice AI platforms designed for enterprise use cases vs entertainment use cases,” said Kevin Fredrick, Managing Partner at OneReach.ai. “Deepgram’s Nova-3 model and Nova-3-Medical model are leading voice AI offerings, including TTS, in terms of the accuracy, latency, efficiency, and scalability required for enterprise use cases.”
Deepgram has conducted benchmarking to demonstrate the performance of Nova-3 Medical. The model claims to deliver industry-leading transcription accuracy, optimising both overall word recognition and critical medical term accuracy.
In addition to accuracy, Nova-3 Medical excels in real-time applications. The model transcribes speech 5-40x faster than many alternative speech recognition vendors, making it ideal for telemedicine and digital health platforms. Its scalable architecture ensures high performance even as transcription volumes increase.
Furthermore, Nova-3 Medical is designed to be cost-effective. Starting at $0.0077 per minute of streaming audio – which Deepgram claims is more than twice as affordable as leading cloud providers – it allows healthcare tech companies to reinvest in innovation and accelerate product development.
Deepgram’s Nova-3 Medical aims to empower developers to build transformative medical transcription applications, driving exceptional outcomes across healthcare.
(Photo by Alexander Sinn)
See also: Autoscience Carl: The first AI scientist writing peer-reviewed papers
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Top seven Voice of Customer (VoC) tools for 2025 appeared first on AI News.
VoC tools are specialised software applications designed to collect, analyse, and interpret customer feedback. Feedback can come from various sources, including surveys, social media, direct customer interactions, and product reviews. The primary goal of the tools is to build a comprehensive understanding of customer sentiment, pain points, and preferences.
VoC tools let organisations gather qualitative and quantitative data, translating the voice of their customers into actionable insights. By implementing these tools, businesses can achieve a deeper understanding of their customers, leading to informed decision-making and ultimately, enhanced customer loyalty.
Here are the top seven VoC tools to consider in 2025, each offering unique features and functions to help you capture the voice of your customers effectively:
Revuze is an AI-driven VoC tool that focuses on extracting actionable insights from customer feedback, reviews, and surveys.
Key features:
Benefits: Revuze empowers businesses to turn large amounts of feedback into strategic insights, enhancing decision-making and customer engagement.
Satisfactory is a user-friendly VoC tool that emphasises customer feedback collection through satisfaction surveys and interactive forms.
Key features:
Benefits: Satisfactory helps businesses quickly gather customer feedback, allowing for immediate action to improve customer satisfaction and experience.
GetFeedback offers a streamlined platform for creating surveys and collecting customer insights, designed for usability across various industries.
Key features:
Benefits: GetFeedback provides actionable insights while ensuring an engaging experience for customers participating in surveys.
Chattermill focuses on analysing customer feedback through sophisticated AI and machine learning algorithms, turning unstructured data into actionable insights.
Key features:
Benefits: Chattermill enables businesses to react quickly to customer feedback, enhancing their responsiveness and improving overall service quality.
Skeepers is designed for brands looking to amplify the customer voice by combining feedback gathering and brand advocacy functions.
Key features:
Benefits: Skeepers helps brands transform customer insights into powerful endorsements, boosting brand reputation and fostering trust.
Medallia is an established leader in the VoC space, providing an extensive platform for capturing feedback from various touchpoints throughout the customer journey.
Key features:
Benefits: Medallia’s comprehensive suite offers valuable tools for organisations aiming to transform customer feedback into strategic opportunities.
InMoment combines customer feedback across all channels, providing organisations with insights to enhance customer experience consistently.
Key features:
Benefits: With InMoment, businesses can create a holistic view of the customer experience, driving improvements across the organisation.
Choosing the right VoC tool involves several considerations:
The post Western drivers remain sceptical of in-vehicle AI appeared first on AI News.
The research – conducted by MHP – surveyed 4,700 car drivers across China, the US, Germany, the UK, Italy, Sweden, and Poland, revealing significant geographical disparities in AI acceptance and understanding.
According to the study, while AI is becoming integral to modern vehicles, European consumers remain hesitant about its implementation and value proposition.
The study found that 48 percent of Chinese respondents view in-car AI predominantly as an opportunity, while merely 23 percent of European respondents share this optimistic outlook. In Europe, 39 percent believe AI’s opportunities and risks are broadly balanced, while 24 percent take a negative stance, suggesting the risks outweigh potential benefits.
Understanding of AI technology also varies significantly by region. While over 80 percent of Chinese respondents claim to understand AI’s use in cars, this figure drops to just 54 percent among European drivers, highlighting a notable knowledge gap.
Marcus Willand, Partner at MHP and one of the study’s authors, notes: “The figures show that the prospect of greater safety and comfort due to AI can motivate purchasing decisions. However, the European respondents in particular are often hesitant and price-sensitive.”
The willingness to pay for AI features shows an equally stark divide. Just 23 percent of European drivers expressed willingness to pay for AI functions, compared to 39 percent of Chinese drivers. The study suggests that most users now expect AI features to be standard rather than optional extras.
Dr Nils Schaupensteiner, Associated Partner at MHP and study co-author, said: “Automotive companies need to create innovations with clear added value and develop both direct and indirect monetisation of their AI offerings, for example through data-based business models and improved services.”
Despite these challenges, traditional automotive manufacturers maintain a trust advantage over tech giants. The study reveals that 64 percent of customers trust established car manufacturers with AI implementation, compared to 50 percent for technology firms like Apple, Google, and Microsoft.
The research identified several key areas where AI could provide significant value across the automotive industry’s value chain, including pattern recognition for quality management, enhanced data management capabilities, AI-driven decision-making systems, and improved customer service through AI-powered communication tools.
“It is worth OEMs and suppliers considering the opportunities offered by the new technology along their entire value chain,” explains Augustin Friedel, Senior Manager and study co-author. “However, the possible uses are diverse and implementation is quite complex.”
The study reveals that while up to 79 percent of respondents express interest in AI-powered features such as driver assistance systems, intelligent route planning, and predictive maintenance, manufacturers face significant challenges in monetising these capabilities, particularly in the European market.
See also: MIT breakthrough could transform robot training
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post How to use AI-driven speech analytics in contact centres appeared first on AI News.
In contact centres, speech analytics tools help:
How does speech analytics driven by AI differ from the traditional one? What benefits can contact centres and businesses receive from it? Find the answers in this article.
They differ in several key aspects:
Here is a list of common technologies driven by artificial intelligence. They are being used to optimise and improve the performance of contact centres and the applications they run:
Artificial intelligence is a branch of computer science that develops programs to solve complex problems by simulating the behaviour of intelligent beings. AI is able to reason, learn, solve problems, and self-correct.
Machine learning is a subsection of AI that teaches computers through experience rather than additional programming. It is a method of data analysis that uses statistical algorithms to find patterns in data and forecast future events without explicit programming.
Natural language processing allows a computer to understand spoken or written language. It can analyse syntax and semantics. In determining meaning and developing suitable answers, this is helpful.
For example, it processes verbal commands given to intelligent virtual operators, virtual assistants that staff work with, or voice menus. Sentiment analysis is another application of this technology. More advanced natural language processing can “learn” to take context into account and read sarcasm, humour, and a range of other human emotions.
A part of natural language processing called natural language understanding enables a computer to comprehend written or spoken language. Grammatical structure, syntax, and semantics of a sentence can all be examined using it. This helps in deciphering meaning and creating suitable answers.
Predictive analytics uses machine learning, data mining, and statistical analysis techniques to analyse data and identify relationships, patterns, and trends. One can create a predictive model using such data. It forecasts the possibility of a given thing happening, the tendency to do something, and their possible consequences.
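As a minimal illustration of the idea, the sketch below fits a least-squares trend line to historical daily call volumes and forecasts the next value. Real predictive models are far richer; the figures here are invented.

```python
# Least-squares trend fit over an invented call-volume history.
def fit_trend(values):
    """Ordinary least-squares fit of y = slope * x + intercept over indices."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def forecast_next(values):
    """Forecast the next point on the fitted trend line."""
    slope, intercept = fit_trend(values)
    return slope * len(values) + intercept

daily_calls = [120, 130, 138, 151, 160]  # invented daily call volumes
print(round(forecast_next(daily_calls), 1))
```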
Software for speech analytics gathers and examines data from conversations with customers. Transcripts of phone conversations, dashboards, and reports can all be created using the gathered data.
Agent productivity, customer satisfaction, call volume, and other metrics are all shown in real time to contact centre management through dashboards. Call transcripts are recordings of conversations in text format used for training and quality control of service.
Speech analysis is most often carried out in the following stages:

1. Call recording: a recording of the conversation that needs to be analysed is captured.
2. Speaker separation: this makes it easier to pinpoint issues. For example, when speech overlaps in a conversation between a manager and a client, one interlocutor is interrupting the other.
3. Transcription: this step produces a text version of the conversation that will be used for subsequent analysis.
4. Text processing: different techniques are applied to the resulting text, including finding tags and themes, marking words and phrases, and assessing the tone of the text. The program also processes terms, dialogues, and discussions.
5. Filtering and grouping: by term, topic, emotional tone, or other parameters.
6. Visualisation: charts, graphs, heat maps, and other visuals clearly present the results achieved.
7. Interpretation: during this phase, judgments are made, trends are identified, important discoveries are highlighted, and the data is interpreted.
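A toy sketch of these stages in code, assuming the audio has already been recorded, separated by speaker, and transcribed. The word lists and tags are illustrative, not drawn from any specific product.

```python
# Illustrative pipeline: tag key phrases in a speaker-separated transcript,
# score tone with a tiny word list, and group results per speaker.
NEGATIVE = {"cancel", "refund", "angry", "slow"}
POSITIVE = {"thanks", "great", "resolved"}

def analyse(turns):
    """turns: (speaker, text) pairs produced by diarisation + transcription."""
    words = [w.strip(".,!?").lower() for _, text in turns for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return {
        "tone": "positive" if score > 0 else "negative" if score < 0 else "neutral",
        "tags": sorted(set(words) & (POSITIVE | NEGATIVE)),
        "turns_per_speaker": {s: sum(1 for sp, _ in turns if sp == s)
                              for s in {sp for sp, _ in turns}},
    }

result = analyse([("client", "I want to cancel, delivery is slow."),
                  ("agent", "Understood, I can help with that.")])
print(result)
```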
The system allows you to record calls and create detailed, complete reports, helping you identify errors in work and find additional growth opportunities. This information can help develop the project and increase the average order value through the right choice of promotion tools and budget savings.
Depending on the company size, industry, size of the contact centre, and other factors, different benefits of speech analytics will come to the fore. The universal advantages are the following:
Quality control teams in call centres check an average of two to four operator calls per month. Businesses may quickly validate up to 100% of calls with speech analytics.
Various interaction metrics can be analysed with the use of speech analytics:
Speech analytics tools are able to pinpoint the areas in which agents’ quality scores are lagging. Following that, they offer useful data to boost productivity.
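A hedged sketch of how such a tool might surface lagging areas: average per-category quality scores across evaluated calls and flag categories below a threshold. The categories, scores, and threshold are all invented for illustration.

```python
from collections import defaultdict

# Average per-category quality scores across evaluated calls and flag
# categories whose mean falls below a threshold. All values are invented.
def lagging_areas(evaluations, threshold=0.8):
    """Return category names whose average score is below the threshold."""
    totals, counts = defaultdict(float), defaultdict(int)
    for evaluation in evaluations:
        for category, score in evaluation.items():
            totals[category] += score
            counts[category] += 1
    return sorted(c for c in totals if totals[c] / counts[c] < threshold)

calls = [
    {"greeting": 0.9, "compliance": 0.7, "resolution": 0.85},
    {"greeting": 0.95, "compliance": 0.6, "resolution": 0.8},
]
flagged = lagging_areas(calls)
print(flagged)
```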
Supervisors can provide agents with individualised feedback more quickly thanks to faster analysis and 100% call coverage. Many contact centres have begun implementing AI assistants to give agents real-time suggestions.
Speech analytics reduces the time for verification processes. Contact centres can handle large call volumes and enhance operational efficiency with its help.
Speech-to-text and text-to-speech voice assistants provide large-scale customer self-service for common queries, freeing up agents to handle more complicated scenarios.
Managers and workforce development teams can develop individualised agent training programmes. This becomes feasible because each agent’s call performance and attributes are assessed in detail.
Speech analytics offers thorough insight into customer requirements. Using sentiment analysis, teams can identify the elements of a satisfying customer experience, or the indicators of a negative one, and use them to influence the customer experience and lifecycle.
Words and phrases used in consumer interactions can be found via speech analytics. Problem-call information can be instantly sent to supervisors by email or instant messenger. Managers are able to address challenging issues in a timely manner because of notifications. After that, they use reports and dashboards to evaluate the effectiveness of their decisions.
Speech analytics can determine a speaker’s emotions at a given moment by considering speech characteristics such as voice volume and pitch. Contact centres can use this information to determine a customer’s general opinion of the business.
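A simplified sketch of those acoustic cues: RMS energy as a proxy for voice volume, and zero-crossing rate as a crude proxy for pitch. Real emotion models combine many such features with trained classifiers; the sample frames below are invented stand-ins for real audio.

```python
import math

# RMS energy approximates loudness; zero-crossing rate crudely tracks pitch.
def rms(samples):
    """Root-mean-square energy: higher means louder."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return crossings / (len(samples) - 1)

agitated = [0.8, -0.7, 0.9, -0.8, 0.7, -0.9]  # loud, rapidly oscillating
calm = [0.1, 0.12, 0.09, 0.11, 0.10, 0.08]    # quiet, slowly varying
print(rms(agitated), zero_crossing_rate(agitated))
```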
Contact centres handle a large amount of personal and financial information. There is a risk of data breaches, unauthorised access, and misuse of customer information, which can lead to regulatory penalties and a loss of customer trust.
How to address:
Contact centres need to put strong data security procedures in place. Such procedures help identify and address vulnerabilities. You can also employ solutions with built-in security features.
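As one illustrative procedure, the sketch below masks obvious payment-card and phone-number patterns in a transcript before storage. A real deployment needs validated PII detection, encryption, and access controls on top; the regular expressions here are deliberately simplistic.

```python
import re

# Simplistic masking of card-like and phone-like digit runs in a transcript.
# Patterns are illustrative only and will miss many real-world formats.
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")
PHONE = re.compile(r"\b\d{3}[ -]\d{3}[ -]\d{4}\b")

def redact(transcript):
    """Replace card-like and phone-like digit runs with placeholder tags."""
    return PHONE.sub("[PHONE]", CARD.sub("[CARD]", transcript))

masked = redact("My card is 4111 1111 1111 1111, call me on 555-867-5309.")
print(masked)
```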
Implementing AI-based voice analytics can require a large financial outlay. Such costs include the following:
How to address:
Contact centres should start with an ROI analysis, projecting possible cost reductions as well as increased income. Implementing changes in phases can help distribute costs and lessens the short-term financial load. You can also adopt cloud-based solutions, which lower up-front expenses because they are usually pay-as-you-go.
Deploying advanced AI technologies and their integration with existing systems can be technically demanding and require specialised knowledge.
How to address:
Implementation complexity can be decreased by collaborating with seasoned suppliers that have a solid track record. These vendors can provide end-to-end services, including integration, training, and ongoing support.
Statistics show that mundane duties take up almost half of a contact centre agent’s working hours. The introduction of modern speech analytics services significantly optimises processes and provides analytical data. Based on this data, you can develop a strategy for the company’s further development and improve relationships with customers, building their loyalty.
The post SAS aims to make AI accessible regardless of skill set with packaged AI models appeared first on AI News.
Introducing lightweight, industry-specific AI models for individual licence, SAS hopes to equip organisations with readily deployable AI technology to productionise real-world use cases with unparalleled efficiency.
Chandana Gopal, research director, Future of Intelligence, IDC, said: “SAS is evolving its portfolio to meet wider user needs and capture market share with innovative new offerings,
“An area that is ripe for SAS is productising models built on SAS’ core assets, talent and IP from its wealth of experience working with customers to solve industry problems.”
In today’s market, the consumption of models is primarily focused on large language models (LLMs) for generative AI. In reality, LLMs are only a small part of the modelling needs of real-world production deployments of AI and decision-making for businesses. With the new offering, SAS is moving beyond LLMs and delivering industry-proven deterministic AI models for use cases spanning fraud detection, supply chain optimisation, entity management, document conversation, healthcare payment integrity, and more.
Unlike traditional AI implementations that can be cumbersome and time-consuming, SAS’ industry-specific models are engineered for quick integration, enabling organisations to operationalise trustworthy AI technology and accelerate the realisation of tangible benefits and trusted results.
Expanding market footprint
Organisations are facing pressure to compete effectively and are looking to AI to gain an edge. At the same time, staffing data science teams has never been more challenging due to AI skills shortages. Consequently, businesses are demanding agility in using AI to solve problems and require flexible AI solutions to quickly drive business outcomes. SAS’ easy-to-use, yet powerful models tuned for the enterprise enable organisations to benefit from a half-century of SAS’ leadership across industries.
Delivering industry models as packaged offerings is one outcome of SAS’ commitment of $1 billion to AI-powered industry solutions. As outlined in the May 2023 announcement, the investment in AI builds on SAS’ decades-long focus on providing packaged solutions to address industry challenges in banking, government, health care and more.
Udo Sglavo, VP for AI and Analytics, SAS, said: “Models are the perfect complement to our existing solutions and SAS Viya platform offerings and cater to diverse business needs across various audiences, ensuring that innovation reaches every corner of our ecosystem.
“By tailoring our approach to understanding specific industry needs, our frameworks empower businesses to flourish in their distinctive environments.”
Bringing AI to the masses
SAS is democratising AI by offering out-of-the-box, lightweight AI models – making AI accessible regardless of skill set – starting with an AI assistant for warehouse space optimisation. Leveraging technology like large language models, these assistants cater to nontechnical users, translating interactions into optimised workflows seamlessly and aiding in faster planning decisions.
Sglavo said: “SAS Models provide organisations with flexible, timely and accessible AI that aligns with industry challenges.
“Whether you’re embarking on your AI journey or seeking to accelerate the expansion of AI across your enterprise, SAS offers unparalleled depth and breadth in addressing your business’s unique needs.”
The first SAS Models are expected to be generally available later this year.
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Meta’s open-source speech AI models support over 1,100 languages appeared first on AI News.
In response to this problem, the Meta-led Massively Multilingual Speech (MMS) project has made remarkable strides in expanding language coverage and improving the performance of speech recognition and synthesis models.
By combining self-supervised learning techniques with a diverse dataset of religious readings, the MMS project has achieved impressive results in growing the ~100 languages supported by existing speech recognition models to over 1,100 languages.
To address the scarcity of labelled data for most languages, the MMS project utilised religious texts, such as the Bible, which have been translated into numerous languages.
These translations provided publicly available audio recordings of people reading the texts, enabling the creation of a dataset comprising readings of the New Testament in over 1,100 languages.
By including unlabelled recordings of other religious readings, the project expanded language coverage to recognise over 4,000 languages.
Despite the dataset’s specific domain and predominantly male speakers, the models performed equally well for male and female voices. Meta also says it did not introduce any religious bias.
Training conventional supervised speech recognition models with just 32 hours of data per language is inadequate.
To overcome this limitation, the MMS project leveraged the benefits of the wav2vec 2.0 self-supervised speech representation learning technique.
By training self-supervised models on approximately 500,000 hours of speech data across 1,400 languages, the project significantly reduced the reliance on labelled data.
The resulting models were then fine-tuned for specific speech tasks, such as multilingual speech recognition and language identification.
Evaluation of the models trained on the MMS data revealed impressive results. In a comparison with OpenAI’s Whisper, the MMS models exhibited half the word error rate while covering 11 times more languages.
Furthermore, the MMS project successfully built text-to-speech systems for over 1,100 languages. Despite the limitation of having relatively few different speakers for many languages, the speech generated by these systems exhibited high quality.
While the MMS models have shown promising results, it is essential to acknowledge their imperfections. Mistranscriptions or misinterpretations by the speech-to-text model could result in offensive or inaccurate language. The MMS project emphasises collaboration across the AI community to mitigate such risks.
You can read the MMS paper here or find the project on GitHub.
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post AI21 Labs raises $64M to help it compete against OpenAI appeared first on AI News.
Competition in NLP (Natural Language Processing) is heating up. OpenAI is currently seen as the industry leader with its GPT-3 model but rivals are gaining traction.
Investors see AI21 Labs as one of the most promising contenders.
“We completed this round during a period of market uncertainty, which highlights the confidence our investors have in AI21’s vision to change the way people consume and produce information,” said Ori Goshen, Co-Founder and Co-CEO of AI21 Labs.
“The funding will allow us to accelerate the company’s global growth while continuing to develop advanced technology in the field of natural language processing. We are looking forward to growing our team and our offerings.”
The latest funding round was led by Ahren and brings AI21 Labs’ valuation to $664 million.
“NLP has reached a critical inflection point and AI21 has developed unique infrastructure and products to successfully serve a large and rapidly growing market,” commented Alice Newcombe-Ellis, Founding and General Partner of Ahren.
“We consider this team to be of the highest calibre, both technically and commercially, leading a differentiated company in a transformative space.”
AI21 Labs’ Jurassic-1 Jumbo model is around the size of GPT-3. The company has been gradually building products around it, including its ‘AI-as-a-Service’ platform AI21 Studio.
One of the consumer-facing products launched by AI21 Labs is Wordtune, an AI writing tool with millions of active users that was chosen by Google as one of its favourite extensions for 2021.
Another product, Wordtune Read, is able to analyse and summarise documents in seconds—enabling users to read long and complex text quickly and efficiently.
A survey last year by John Snow Labs found that 60 percent of budgets for NLP technologies increased by at least 10 percent in 2020, while 33 percent reported a 30 percent increase and 15 percent said their budget more than doubled.
NLP specialists like AI21 Labs are set to benefit greatly from the clear appetite for such technologies over the coming years.
(Image Credit: AI21 Labs)
Related: Meta’s NLLB-200 AI model improves translation quality by 44%
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post IRS expands voice bot options for faster service appeared first on AI News.
“This is part of a wider effort at the IRS to help improve the experience of taxpayers,” said IRS commissioner Chuck Rettig. “We continue to look for ways to better assist taxpayers, and that includes helping people avoid waiting on hold or having to make a second phone call to get what they need. The expanded voice bots are another example of how technology can help the IRS provide better service to taxpayers.”
Voice bots run on software powered by artificial intelligence, which enables a caller to navigate an interactive voice response. The IRS has been using voice bots on numerous toll-free lines since January, enabling taxpayers with simple payment or notice questions to get what they need quickly and avoid waiting. Taxpayers can always speak with an English- or Spanish-speaking IRS telephone representative if needed.
Eligible taxpayers who call the Automated Collection System (ACS) and Accounts Management toll-free lines and want to discuss payment plan options can authenticate or verify their identities through a personal identification number (PIN) creation process. Setting up a PIN is easy: Taxpayers will need their most recent IRS bill and some basic personal information to complete the process.
“To date, the voice bots have answered over three million calls. As we add more functions for taxpayers to resolve their issues, I anticipate many more taxpayers getting the service they need quickly and easily,” said Darren Guillot, IRS deputy commissioner of Small Business/Self Employed Collection & Operations Support.
Additional voice bot service enhancements are planned in 2022 that will allow authenticated individuals (taxpayers with established or newly created PINs) to get:
In addition to the payment lines, voice bots help people who call the Economic Impact Payment (EIP) toll-free line with general procedural responses to frequently asked questions. The IRS also added voice bots for the Advance Child Tax Credit toll-free line in February to provide similar assistance to callers who need help reconciling the credits on their 2021 tax return.
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Zoom receives backlash for emotion-detecting AI appeared first on AI News.
The system, first reported by Protocol, claims to scan users’ faces and their speech to determine their emotions.
Zoom detailed the system more in a blog post last month. The company says ‘Zoom IQ’ will be particularly useful for helping salespeople improve their pitches based on the emotions of call participants.
Naturally, the system is seen as rather dystopian and has received its fair share of criticism.
On Wednesday, over 25 rights groups sent a joint letter to Zoom CEO Eric Yuan. The letter urges Zoom to cease research on emotion-based AI.
The letter’s signatories include the American Civil Liberties Union (ACLU), Muslim Justice League, and Access Now.
One of the key concerns is that emotion-detecting AI could be used for hiring or financial decisions, such as whether to grant loans, with the potential to entrench existing inequalities.
“Results are not intended to be used for employment decisions or other comparable decisions. All recommended ranges for metrics are based on publicly available research,” Zoom explained.
Zoom IQ tracks metrics including:
Esha Bhandari, Deputy Director of the ACLU Speech, Privacy, and Technology Project, called emotion-detecting AI “creepy” and “a junk science”.
(Photo by iyus sugiharto on Unsplash)
The post Zoom receives backlash for emotion-detecting AI appeared first on AI News.
LinkedIn co-founder Reid Hoffman is joining Suleyman on the venture.
“Reid and I are excited to announce that we are co-founding a new company, Inflection AI,” wrote Suleyman in a statement.
“Inflection will be an AI-first consumer products company, incubated at Greylock, with all the advantages and expertise that come from being part of one of the most storied venture capital firms in the world.”
Dr Karén Simonyan, another former DeepMind AI expert, will serve as Inflection AI’s chief scientist and its third co-founder.
“Karén is one of the most accomplished deep learning leaders of his generation. He completed his PhD at Oxford, where he designed VGGNet and then sold his first company to DeepMind,” continued Suleyman.
“He created and led the deep learning scaling team and played a key role in such breakthroughs as AlphaZero, AlphaFold, WaveNet, and BigGAN.”
Inflection AI will focus on machine learning and natural language processing.
“Recent advances in artificial intelligence promise to fundamentally redefine human-machine interaction,” explains Suleyman.
“We will soon have the ability to relay our thoughts and ideas to computers using the same natural, conversational language we use to communicate with people. Over time these new language capabilities will revolutionise what it means to have a digital experience.”
Interest in natural language processing is surging. This month, Microsoft completed its $19.7 billion acquisition of Siri voice recognition engine creator Nuance.
Suleyman departed Google in January 2022 following an eight-year stint at the company.
While at Google, Suleyman was placed on administrative leave following bullying allegations. During a podcast, he said that he “really screwed up” and was “very sorry about the impact that caused people and the hurt people felt.”
Suleyman joined venture capital firm Greylock after leaving Google.
“There are few people who are as visionary, knowledgeable and connected across the vast artificial intelligence landscape as Mustafa,” wrote Hoffman, a Greylock partner, in a post at the time.
“Mustafa has spent years thinking about how technological advances impact society, and he cares deeply about the ethics and governance supporting new AI systems.”
Inflection AI was incubated by Greylock. Suleyman and Hoffman will both remain venture partners at the company.
Suleyman promises that more details about Inflection AI’s product plans will be provided over the coming months.
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post DeepMind co-founder Mustafa Suleyman launches new AI venture appeared first on AI News.
“Completion of this significant and strategic acquisition brings together Nuance’s best-in-class conversational AI and ambient intelligence with Microsoft’s secure and trusted industry cloud offerings,” said Scott Guthrie, Executive Vice President of the Cloud + AI Group at Microsoft.
“This powerful combination will help providers offer more affordable, effective, and accessible healthcare, and help organisations in every industry create more personalised and meaningful customer experiences. I couldn’t be more pleased to welcome the Nuance team to our Microsoft family.”
Nuance became a household name (in techie households, anyway) for creating the speech recognition engine that powers Apple’s smart assistant, Siri. However, Nuance has been in the speech recognition business since 2001, when it was known as ScanSoft.
While it may not have made many big headlines in recent years, Nuance has continued to make some impressive advancements—which caught the attention of Microsoft.
Microsoft announced its intention to acquire Nuance for $19.7 billion last year, in the company’s second-largest deal behind its $26.2 billion acquisition of LinkedIn (both deals would be dwarfed by Microsoft’s proposed $70 billion purchase of Activision Blizzard).
The proposed acquisition of Nuance caught the attention of global regulators. It was cleared in the US relatively quickly, while the EU’s regulator got in the festive spirit and cleared the deal just prior to last Christmas. The UK’s Competition and Markets Authority finally gave it a thumbs-up last week.
Regulators examined whether the deal could raise anti-competitive concerns in verticals where both companies are active, such as healthcare. However, after investigation, the regulators determined that competition shouldn’t be affected by the deal.
The EU, for example, determined that “competing transcription service providers in healthcare do not depend on Microsoft for cloud computing services” and that “transcription service providers in the healthcare sector are not particularly important users of cloud computing services”.
Furthermore, the EU’s regulator concluded:
The companies appear keen to ensure that people are aware the deal is about more than just healthcare.
“Combining the power of Nuance’s deep vertical expertise and proven business outcomes across healthcare, financial services, retail, telecommunications, and other industries with Microsoft’s global cloud ecosystems will enable us to accelerate our innovation and deploy our solutions more quickly, more seamlessly, and at greater scale to solve our customers’ most pressing challenges,” said Mark Benjamin, CEO of Nuance.
Benjamin will remain the CEO of Nuance and will report to Guthrie.
(Photo by Omid Armin on Unsplash)
The post Microsoft acquires Nuance to usher in ‘new era of outcomes-based AI’ appeared first on AI News.
Last year, the European Commission drafted AI laws. While the US and China are set to dominate AI development with their vast resources, economic might, and light-touch regulation, European rivals – including the UK and EU members – believe they can lead in ethical standards.
In the draft of the EU regulations, companies that are found guilty of AI misuse face a fine of €30 million or six percent of their global turnover (whichever is greater). The risk of such fines has been criticised as driving investments away from Europe.
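The “whichever is greater” penalty rule amounts to a simple maximum of a fixed floor and a turnover percentage. A minimal sketch of the calculation (the function name and example turnover figure are illustrative, not taken from the draft regulation):

```python
def max_potential_fine(global_turnover_eur: float) -> float:
    """Maximum fine under the draft EU AI regulation:
    EUR 30 million or 6% of global turnover, whichever is greater."""
    return max(30_000_000, 0.06 * global_turnover_eur)

# For a company with EUR 1 billion in global turnover, the 6% share
# (EUR 60 million) exceeds the EUR 30 million floor.
print(max_potential_fine(1_000_000_000))  # 60000000.0

# For a smaller company with EUR 100 million in turnover, the
# EUR 30 million floor applies instead.
print(max_potential_fine(100_000_000))  # 30000000.0
```

The floor means the penalty can far exceed six percent of turnover for smaller firms, which is part of why critics argue the regime could deter investment in Europe.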
The EU’s draft AI regulation classifies systems into three risk categories: unacceptable risk, high risk, and limited risk.
Systems deemed an unacceptable risk will face a blanket ban from deployment in the EU, while limited-risk systems will require only minimal oversight.
Organisations deploying high-risk AI systems would be required to have things like:
However, the cumbersome nature of the EU – requiring agreement from all member states, each with their own priorities – means that new regulations are often subject to more debate and delay than national lawmaking.
Reuters reports that on Wednesday two key lawmakers said the EU’s AI regulations will likely take more than a year to agree. The delay stems primarily from debates over whether facial recognition should be banned and who should enforce the rules.
“Facial recognition is going to be the biggest ideological discussion between the right and left,” said one lawmaker, Dragos Tudorache, in a Reuters interview.
“I don’t believe in an outright ban. For me, the solution is to put the right rules in place.”
With leading academic institutions and more than 1,300 AI companies employing over 30,000 people, the UK is the biggest destination for AI investment in Europe and the third-biggest in the world. Between January and June 2021, global investors poured £13.5 billion into more than 1,400 “deep tech” UK private technology firms—more than Germany, France, and Israel combined.
In September 2021, the UK published its 10-year National Artificial Intelligence Strategy in a bid to secure its European AI leadership. Governance plays a large role in the strategy.
“The UK already punches above its weight internationally and we are ranked third in the world behind the USA and China in the list of top countries for AI,” commented DCMS Minister Chris Philp.
“We’re laying the foundations for the next ten years’ growth with a strategy to help us seize the potential of artificial intelligence and play a leading role in shaping the way the world governs it.”
As part of its strategy, the UK is creating an ‘AI Standards Hub’ to coordinate the country’s engagement in establishing global rules and is working with The Alan Turing Institute to update guidance on AI ethics and safety.
“We are proud of creating a dynamic, collaborative community of diverse researchers and are growing world-leading capabilities in responsible, safe, ethical, and inclusive AI research and innovation,” said Professor Sir Adrian Smith, Chief Executive of The Alan Turing Institute.
Striking a balance between innovation-stifling overregulation and ethics-compromising underregulation is never a simple task. It will be interesting to observe how AI regulations in Europe will differ across the continent and beyond.
(Photo by Christian Lue on Unsplash)
Related: British intelligence agency GCHQ publishes ‘Ethics of AI’ report
The post The EU’s AI rules will likely take over a year to be agreed appeared first on AI News.