Yonkers Observer
Microsoft Considers More Limits for Its New A.I. Chatbot

by Yonkers Observer Report
February 16, 2023
in Technology

Releasing it — despite potential imperfections — was a critical example of Microsoft’s “frantic pace” to incorporate generative A.I. into its products, he said. Executives at a news briefing on Microsoft’s campus in Redmond, Wash., repeatedly said it was time to get the tool out of the “lab” and into the hands of the public.

“I feel especially in the West, there is a lot more of like, ‘Oh, my God, what will happen because of this A.I.?’” Mr. Nadella said. “And it’s better to sort of really say, ‘Hey, look, is this actually helping you or not?’”

Oren Etzioni, professor emeritus at the University of Washington and founding chief executive of the Allen Institute for AI, a prominent lab in Seattle, said Microsoft “took a calculated risk, trying to control the technology as much as it can be controlled.”

He added that many of the most troubling cases involved pushing the technology beyond ordinary behavior. “It can be very surprising how crafty people are at eliciting inappropriate responses from chatbots,” he said. Referring to Microsoft officials, he continued, “I don’t think they expected how bad some of the responses would be when the chatbot was prompted in this way.”

To hedge against problems, Microsoft gave just a few thousand users access to the new Bing, though it said it planned to expand to millions more by the end of the month. To address concerns over accuracy, it provided hyperlinks and references in its answers so users could fact-check the results.

The caution was informed by the company’s experience nearly seven years ago when it introduced a chatbot named Tay. Users almost immediately found ways to make it spew racist, sexist and other offensive language. The company took Tay down within a day, never to release it again.

Much of the training on the new chatbot was focused on protecting against that kind of harmful response, or scenarios that invoked violence, such as planning an attack on a school.

© 2025 Yonkers Observer or its affiliated companies.
