ChatGPT – Artifex.News

OpenAI Names Political Veteran Chris Lehane As Head Of Global Policy: Report (30 August 2024)


OpenAI has named political veteran Chris Lehane as its vice president of global policy.

San Francisco:

ChatGPT-maker OpenAI has named political veteran Chris Lehane as its vice president of global policy, the New York Times reported on Friday.

Lehane, who is a member of the executive team at OpenAI, is a former policy chief for Airbnb and served in the Clinton White House.

OpenAI did not immediately respond to a Reuters request for comment.

The appointment comes as Apple and chip giant Nvidia are reportedly in talks to invest in OpenAI as part of a new fundraising round that could value the Microsoft-backed startup above $100 billion.

Earlier in the day, the Financial Times reported that OpenAI is weighing changes to its corporate structure to become more investor-friendly.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

OpenAI In Talks To Raise Funding At Over $100 Billion Valuation, WSJ Reports (29 August 2024)


OpenAI backer Microsoft is also expected to put in money.

OpenAI, the startup behind the popular ChatGPT, is reportedly in discussions to raise billions of dollars in a new funding round that could see it valued at above $100 billion, the Wall Street Journal reported on Wednesday.

The funding round is expected to be led by venture capital firm Thrive Capital, which is poised to invest approximately $1 billion, the report said, citing people familiar with the matter.

Tech giant and OpenAI backer Microsoft is also expected to put in money, it added.

OpenAI, Microsoft and Thrive Capital did not immediately respond to Reuters’ requests for comment.

ChatGPT, a chatbot that can generate human-like responses based on user prompts, has driven AI’s popularity and fueled a meteoric rise in the valuation of the San Francisco-based firm.

(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)

New AI Chip “Will Revolutionise ChatGPT”, Claims Startup Founded By Harvard Dropouts (16 July 2024)


Sohu, the new chipset, will run only transformer AI models

New Delhi:

The reason you can ask ChatGPT “how to eat a mango without spilling it” and much more is Nvidia’s H100 and B200 Graphics Processing Units (GPUs). These magical chipsets that power AI chatbots have propelled Nvidia to the front of the AI hardware industry, with its market capitalization crossing the $3 trillion mark last month, surpassing both Microsoft and Apple.

But now, a relatively young startup founded by two Harvard dropouts has set its eyes on a share of the AI hardware pie. Etched, the California-based startup, is looking to disrupt the AI chipset market with its transformer ASIC (Application-Specific Integrated Circuit) chip called Sohu.

Etched claims Sohu is 20 times faster than Nvidia’s flagship H100 at running transformers like the ones behind ChatGPT. Even the B200, Nvidia’s more powerful offering, is reportedly 10 times slower than Sohu, according to claims the company has made based on emulation tests.

Source: X/@Etched


Sohu takes an entirely different approach to providing the high computational power needed to run billions of parameters (the variables learned while training an AI model) for transformer models. Unlike GPUs, which can handle many kinds of computationally heavy tasks (like rendering graphics in real time), Etched has chosen to build a specialised chip that caters only to transformer AI models, the architecture behind ChatGPT, Sora (OpenAI’s text-to-video AI model) and Google’s Gemini.

What this means is that Sohu cannot run other kinds of AI models, such as convolutional neural networks (used for image recognition). The trade-off is speed, which opens up the possibility for developers to explore new AI products that were not feasible until now because of the limited compute available on GPUs.

For example, Sohu could potentially enable a real-time translator that hears or reads Hindi, Gujarati or Tamil and responds in French, English or German. Of course, such multimodal and multilingual translation needs more than just computational power, but in theory the chip opens up the possibility.

Another multimodal application of transformers that the chipset might tap into is integrating vision and language. This essentially means a model that understands both text and images simultaneously, throwing open the possibility of visual question-and-answer sessions, much like an interview.

But all of this remains theoretical. Etched raised $120 million on June 25 to make it a reality, though a firm timeline for an actual release of the Sohu ASIC remains elusive.

Etched has claimed it already has “tens of millions of dollars” worth of hardware reserved in preorders. The company has also secured a deal with TSMC (Taiwan Semiconductor Manufacturing Company) to fabricate the 4-nanometer chip, saying the deal will help “ramp up our first year of production.”

AI Improves Creativity In Stories, Reduces Diversity: Study (15 July 2024)


The study was published in the journal Science Advances

New Delhi:

Stories created with the help of ChatGPT are more creative, engaging their audience with more plot twists, compared to those by writers not using the tool, according to research.

However, researchers also found that diversity across stories by writers using generative artificial intelligence (GenAI) suffered, thereby increasing the risk of losing “collective novelty”.

GenAI can create content — text, image, audio or video — and is based on large language models, which are trained on massive amounts of text data and can, therefore, process, interpret and respond to requests in the natural language that humans use to communicate.

The authors of the study, published in the journal Science Advances, found that writers who were inherently more creative benefitted the least from ideas generated by ChatGPT, while those who were less creative became more creative thanks to ideas suggested by the GenAI model.

Therefore, AI “effectively equalised creativity” among all the writers, the team of researchers said.

“While these results point to an increase in individual creativity, there is risk of losing collective novelty. If the publishing industry were to embrace more generative AI-inspired stories, our findings suggest that the stories would become less unique in aggregate and more similar to each other,” said study author Anil Doshi, assistant professor at the University College London’s School of Management, UK.

For the study, 300 participants were tasked with writing a short, eight-sentence story (a ‘microstory’) for a target audience of young adults, and 600 evaluators were recruited to judge the writers’ work.

The writers were divided into three groups. The first was allowed no help from AI, while the second was allowed to take one idea, along with the first three sentences of the story, created by ChatGPT for inspiration. The third group was allowed to choose from up to five AI-created story ideas.

The authors found that the work of writers taking the AI’s help was 8 to 9 per cent more novel, compared to that of writers not relying on the AI. Along with novelty, the microstories were judged for “usefulness” — were they engaging enough for the audience, and could they be developed and potentially published?

The team also found that the less creative writers ‘became’ more so, with AI making their stories more novel by 10.7 per cent and more useful by 11.5 per cent, compared to the stories of writers not taking AI’s help.

The AI made the work by less creative writers up to 26.6 per cent better, 22.6 per cent more enjoyable and 15.2 per cent less boring, the authors found.

The inherent creativity of the writers was measured using a psychological test — Divergent Association Task (DAT).

Divergent thinking, which allows one to think of multiple solutions to a problem spontaneously, is known to be important to creativity.

Further, the authors found that the stories produced by writers using GenAI’s ideas were 10.7 per cent more similar to each other, compared to those of writers not using AI.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

AI accessibility? Blind gamer puts ChatGPT to the test (11 July 2024)


Japanese eSports gamer Mashiro is blind and often relies on a companion to get around Tokyo — but he hopes that artificial intelligence, hailed as a promising tool for people with disabilities, can help him travel alone.

The 26-year-old ‘Street Fighter’ player put the latest version of AI chatbot ChatGPT to the test on his way to a stadium for a recent Para eSports meet-up.

“I can’t participate in an event like this without someone to rely on,” he told AFP. “Also, sometimes I just want to get around by myself without speaking to other people.

“So if I can use technology like ChatGPT to design my own special needs support, that would be great.”

This year, the US firm OpenAI released GPT-4o, which understands voice, text and image commands in several languages.

The generative AI tool, along with others such as Google’s Gemini, is part of a fast-growing field that experts say could make education, employment and everyday services more accessible.

Following the streets’ tactile paving, Masahiro Fujimoto – who goes by his online handle Mashiro – used his stick adorned with a small monkey mascot to find his way from the station.

As he went, he spoke to GPT-4o like a friend, receiving its answers through an earpiece in one ear, leaving the other side free to listen out for cars.

Having asked for basic directions, he added: “In fact, I am blind, so could you give me further details for blind people?”

“Of course,” the bot replied. “You might notice an increase in crowd noise and the sound of activities as you get closer.”

The journey, 20 minutes for sighted people, took Mashiro around four times as long with several U-turns.

When it started to rain heavily, he requested help from his friend, who is partially sighted, to finish the journey.

“Arrival!” finally shouted Mashiro, who has microphthalmos and has been blind since birth, using only sound to demolish his opponents on ‘Street Fighter 6’.

AI can cater to specific needs better than “one-size-fits-all” assistive products and technologies, said Youngjun Cho, an associate professor in computer science at University College London (UCL).

“Its potential is enormous,” said Cho, who also works at UCL’s Global Disability Innovation Hub.

“I envisage that this can empower many individuals and promote independence.”

People with hearing loss can, for example, use AI speech-to-text transcription, while chatbots can help format a resume for someone with learning disabilities.

Some tools for visually impaired people, such as Seeing AI, Envision AI and TapTapSee, describe phone camera images.

Danish app Be My Eyes, where real-life volunteers help via live chat, is working with OpenAI to develop a “digital visual assistant”.

But Masahide Ishiki, a Japanese expert in disability and digital accessibility, warned it can be “tricky” to catch mistakes from ChatGPT, which “replies so naturally”.

“The next objective (for generative AI) is to improve the accuracy of real-time visual recognition, to ultimately reach capabilities close to that of a human eye,” said Ishiki, who is blind.

Marc Goblot of the Tech for Disability group also cautioned that AI is trained on “very mainstream datasets” which are “not representative of the full spectrum of people’s perceptions and especially the margins”.

Mashiro said ChatGPT’s limited recognition of Japanese words and locations made his AI-assisted journey more challenging.

Although the experiment was “a lot of fun”, it would have been easier if ChatGPT was connected to a map tool, said the gamer, who travelled around Europe last year using Google Maps and help from those around him.

He has already decided on his next travel destination: Yakushima rainforest island in southern Japan.

“I want to experience whatever happens when travelling somewhere like that,” he said.



ChatGPT Was Asked For Legal Advice (2 June 2024)


The first answers the chatbots provided were often based on American law.

At some point in your life, you are likely to need legal advice. A survey carried out in 2023 by the Law Society, the Legal Services Board and YouGov found that two-thirds of respondents had experienced a legal issue in the past four years. The most common problems concerned employment, finance, welfare and benefits, and consumer issues.

But not everyone can afford to pay for legal advice. Of those survey respondents with legal problems, only 52% received professional help, 11% had assistance from other people such as family and friends and the remainder received no help at all.

Many people turn to the internet for legal help. And now that we have access to artificial intelligence (AI) chatbots such as ChatGPT, Google Bard, Microsoft Co-Pilot and Claude, you might be thinking about asking them a legal question.

These tools are powered by generative AI, which generates content when prompted with a question or instruction. They can quickly explain complicated legal information in a straightforward, conversational style, but are they accurate?

We put the chatbots to the test in a recent study published in the International Journal of Clinical Legal Education. We entered the same six legal questions on family, employment, consumer and housing law into ChatGPT 3.5 (free version), ChatGPT 4 (paid version), Microsoft Bing and Google Bard. The questions were ones we typically receive in our free online law clinic at The Open University Law School.

We found that these tools can indeed provide legal advice, but the answers were not always reliable or accurate. Here are five common mistakes we observed:

1. Where is the law from?

The first answers the chatbots provided were often based on American law. This was often not stated or obvious. Without legal knowledge, the user would likely assume the law related to where they live. The chatbot sometimes did not explain that law differs depending on where you live.

This is especially complex in the UK, where laws differ between England and Wales, Scotland and Northern Ireland. For example, the law on renting a house in Wales is different from that in Scotland, Northern Ireland and England, while Scottish and English courts have different procedures for dealing with divorce and the ending of a civil partnership.

If necessary, we used one additional question: “is there any English law that covers this problem?” We had to use this follow-up for most of the questions, after which the chatbot produced an answer based on English law.

2. Out of date law

We also found that sometimes the answer to our question referred to out of date law, which has been replaced by new legal rules. For example, the divorce law changed in April 2022 to remove fault-based divorce in England and Wales.

Some responses referred to the old law. AI chatbots are trained on large volumes of data – we don’t always know how current the data is, so it may not include the most recent legal developments.

3. Bad advice

We found most of the chatbots gave incorrect or misleading advice when dealing with the family and employment queries. The answers to the housing and consumer questions were better, but there were still gaps in the responses. Sometimes, they missed really important aspects of the law, or explained it incorrectly.

We found that the answers produced by the AI chatbots were well-written, which could make them appear more convincing. Without having legal knowledge, it is very difficult for someone to determine whether an answer produced is correct and applies to their individual circumstances.

Even though this technology is relatively new, there have already been cases of people relying on chatbots in court. In a civil case in Manchester, a litigant representing themselves in court reportedly presented fictitious legal cases to support their argument. They said they had used ChatGPT to find the cases.

4. Too generic

In our study, the answers didn’t provide enough detail for someone to understand their legal issue and know how to resolve it. The answers provided information on a topic rather than specifically addressing the legal question.

Interestingly, the AI chatbots were better at suggesting practical, non-legal ways to address a problem. While this can be useful as a first step to resolving an issue, it does not always work, and legal steps may be needed to enforce your rights.

5. Pay to play

We found that ChatGPT 4 (the paid version) was better overall than the free versions. This risks further reinforcing digital and legal inequality.

The technology is evolving, and there may come a time when AI chatbots are better able to provide legal advice. Until then, people need to be aware of the risks when using them to resolve their legal problems. Other sources of help such as Citizens Advice will provide up to date, accurate information and are better placed to assist.

All the chatbots gave answers to our questions but, in their responses, stated it was not their function to provide legal advice and recommended getting professional help. After conducting this study, we recommend the same.

Francine Ryan, Senior Lecturer in Law and Director of the Open Justice Centre, The Open University and Elizabeth Hardie, Senior Lecturer, Law School, The Open University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)



OpenAI Says AI Is “Safe Enough” As Scandals Raise Concerns (21 May 2024)


Sam Altman insisted that OpenAI had put in “a huge amount of work” to ensure the safety of its models.

Seattle:

OpenAI CEO Sam Altman defended his company’s AI technology as safe for widespread use, as concerns mount over potential risks and lack of proper safeguards for ChatGPT-style AI systems.

Altman’s remarks came at a Microsoft event in Seattle, where he spoke to developers just as a new controversy erupted over an OpenAI AI voice that closely resembled that of the actress Scarlett Johansson.

The CEO, who rose to global prominence after OpenAI released ChatGPT in 2022, is also grappling with questions about the safety of the company’s AI following the departure of the team responsible for mitigating long-term AI risks.

“My biggest piece of advice is this is a special time and take advantage of it,” Altman told the audience of developers seeking to build new products using OpenAI’s technology.

“This is not the time to delay what you’re planning to do or wait for the next thing,” he added.

OpenAI is a close partner of Microsoft and provides the foundational technology, primarily the GPT-4 large language model, for building AI tools.

Microsoft has jumped on the AI bandwagon, pushing out new products and urging users to embrace generative AI’s capabilities.

“We kind of take for granted” that GPT-4, while “far from perfect…is generally considered robust enough and safe enough for a wide variety of uses,” Altman said.

Altman insisted that OpenAI had put in “a huge amount of work” to ensure the safety of its models.

“When you take a medicine, you want to know what’s going to be safe, and with our model, you want to know it’s going to be robust to behave the way you want it to,” he added.

However, questions about OpenAI’s commitment to safety resurfaced last week when the company dissolved its “superalignment” group, a team dedicated to mitigating the long-term dangers of AI.

In announcing his departure, team co-leader Jan Leike criticized OpenAI for prioritizing “shiny new products” over safety in a series of posts on X (formerly Twitter).

“Over the past few months, my team has been sailing against the wind,” Leike said.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”

This controversy was swiftly followed by a public statement from Johansson, who expressed outrage over a voice used by OpenAI’s ChatGPT that sounded similar to her voice in the 2013 film “Her.”

The voice in question, called “Sky,” was featured last week in the release of OpenAI’s more human-like GPT-4o model.

In a short statement on Tuesday, Altman apologized to Johansson but insisted the voice was not based on hers.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

Scarlett Johansson “Angered” By OpenAI Chatbot Voice That Sounds “Eerily” Like Her (21 May 2024)


Ms Johansson said she was shocked when she heard the AI chatbot’s demo.

Hollywood actor Scarlett Johansson said that she was “shocked, angered and in disbelief” after Sam Altman’s OpenAI launched a chatbot with a voice “eerily similar” to hers, as per a BBC report. The actress said that she had previously declined the company’s request for her voice to be used in its new ChatGPT 4.0 chatbot, which reads material aloud to users.

Since the AI chatbot named Sky debuted last week, many users were quick to draw parallels between the chatbot’s tone and Scarlett Johansson’s in the 2013 movie ‘Her’.

OpenAI stated on X (formerly Twitter) that the AI voice will be put on hold while the company responds to “questions about how we chose the voices in ChatGPT.” The Sky voice was “not an imitation” of Ms Johansson’s, the company wrote in a blog post. They added that it was recorded by a separate professional actor, whose name they would not disclose to protect her privacy.

However, the ‘Marriage Story’ actor accused OpenAI and CEO Sam Altman of copying her voice. She said in a statement, “Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people.”

Ms Johansson added that she declined the offer but was taken aback when she heard the demo. “When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference. Mr Altman even insinuated that the similarity was intentional, tweeting a single word “her” – a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.”

Ms Johansson stated that the circumstances “forced her to hire legal counsel,” and as a result, her attorney sent two letters to Mr Altman and OpenAI requesting an explanation of how the chatbot’s voice was created. She added that OpenAI then “reluctantly agreed” to remove the voice from the platform.

“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected,” she said in her statement.

Meanwhile, Sam Altman said in a statement emailed to Reuters that Sky’s voice was not an imitation of Johansson, but belonged to a different professional actress. He said, “The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers. We cast the voice actor behind Sky’s voice before any outreach to Ms Johansson. Out of respect for Ms Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms Johansson that we didn’t communicate better.”

ChatGPT Rival ‘Claude’ Now Available In Europe, Can Be Accessed For Free (14 May 2024)


The OpenAI rival introduced the latest version of Claude in March. (Representational)

San Francisco:

Anthropic on Monday announced that its artificial intelligence assistant “Claude” is available in Europe after launching in the United States earlier this year.

“We’re excited to announce that Claude, Anthropic’s trusted AI assistant, is now available for people and businesses across Europe to enhance their productivity and creativity,” the San Francisco-based tech startup said in a blog post.

Claude can be accessed for free in Europe online at claude.ai or using an app tailored for Apple mobile devices, according to Anthropic.

It is also available to businesses through a paid “Claude Team” subscription plan, the company added.

The OpenAI rival introduced the latest version of Claude in March.

“Claude has strong levels of comprehension and fluency in French, German, Spanish, Italian, and other European languages, allowing users to converse with Claude in multiple languages,” Anthropic said.

“Claude’s intuitive, user-friendly interface makes it easy for anyone to seamlessly integrate our advanced AI models into their workflows.”

OpenAI’s launch of ChatGPT in late 2022 sparked keen interest in generative AI that enables computers to create images, videos, computer code, or written works from simple text prompts.

Founded by former OpenAI employees, Anthropic strives to distinguish itself from its competitors by building stricter safeguards into its technology to prevent it from being misused.

Millions of people are already using Claude for an array of purposes, according to Anthropic co-founder and chief executive Dario Amodei. “I can’t wait to see what European people and companies will be able to create with Claude,” Amodei said.

An application programming interface for developers interested in using Anthropic AI has been accessible in Europe since the start of this year, according to the company.

Anthropic has raised at least $7 billion in funding since 2021, with its backers including Amazon, Google, and Salesforce.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

British School Employs AI Robot As Principal Headteacher For Enhanced Decision-Making (17 October 2023)


Cottesmore is an academic boarding prep school for boys and girls.

Artificial intelligence is taking over many jobs by automating tasks that were once done by humans. This is happening in a variety of industries, including manufacturing, customer service, healthcare, and transportation. In a surprising move, a preparatory school in the United Kingdom has named an AI robot as its “principal headteacher.” Cottesmore School, located in West Sussex, collaborated with an artificial intelligence developer to design Abigail Bailey, the robot, with the purpose of assisting the school’s headmaster.

Tom Rogerson, headmaster of Cottesmore, told The Telegraph that he uses the robot for advice on issues ranging from how to support fellow staff members to helping pupils with ADHD and writing school policies. The technology works in a similar way to ChatGPT, the online AI service where users type questions that are then answered by the chatbot’s algorithms.

Mr Rogerson said the AI principal has been developed to have a wealth of knowledge in machine learning and educational management, with the ability to analyze vast amounts of data.

He told The Telegraph: “Sometimes having someone or something there to help you is a very calming influence.

“It’s nice to think that someone who is unbelievably well trained is there to help you make decisions.

“It doesn’t mean you don’t also seek counsel from humans. Of course you do. It’s just very calming and reassuring knowing that you don’t have to call anybody up, bother someone, or wait around for an answer.”

He added: “Being a school leader, a headmaster, is a very lonely job. Of course we have head teacher’s groups, but just having somebody or something on tap that can help you in this lonely place is very reassuring.”

The Cottesmore school charges fees of up to almost £32,000 (Rs 32,48,121) a year for UK students.

The school, which has received accolades such as Tatler’s “Prep School of the Year,” is a boarding institution catering to boys and girls between the ages of four and 13.
