Tag: artificial intelligence – NBC4 Washington

New York Times sues Microsoft, ChatGPT maker OpenAI over copyright infringement https://www.nbcwashington.com/news/business/money-report/new-york-times-sues-microsoft-chatgpt-maker-openai-over-copyright-infringement/3502200/
  • The New York Times on Wednesday filed a lawsuit against Microsoft and OpenAI, the company behind popular AI chatbot ChatGPT, accusing them of infringing copyright and abusing the newspaper’s intellectual property.
  • In a court filing, the newspaper said it seeks to hold Microsoft and OpenAI to account for “billions of dollars in statutory and actual damages” it believes it is owed for “unlawful copying and use of The Times’s uniquely valuable works.”
  • The Times accused Microsoft and OpenAI of creating a business model based on “mass copyright infringement,” stating their AI systems “exploit and, in many cases, retain large portions of the copyrightable expression contained in those works.”
    The New York Times on Wednesday filed a lawsuit against Microsoft and OpenAI, the company behind popular AI chatbot ChatGPT, accusing the pair of infringing copyright and abusing the newspaper’s intellectual property to train large language models.

    Microsoft both invests in and supplies OpenAI, providing it with access to the Redmond giant’s Azure cloud computing technology.

    The NYT said in a filing with the U.S. District Court for the Southern District of New York that it seeks to hold Microsoft and OpenAI to account for the “billions of dollars in statutory and actual damages” it believes it is owed for the “unlawful copying and use of The Times’s uniquely valuable works.”

    CNBC has reached out to Microsoft and OpenAI for comment.

    The Times said in an emailed statement that it “recognizes the power and potential of GenAI for the public and for journalism,” but added that journalistic material should be used for commercial gain only with permission from the original source.

    “These tools were built with and continue to use independent journalism and content that is only available because we and our peers reported, edited, and fact-checked it at high cost and with considerable expertise,” the Times said.

    “Settled copyright law protects our journalism and content. If Microsoft and OpenAI want to use our work for commercial purposes, the law requires that they first obtain our permission. They have not done so.”

    The New York Times is represented in the proceedings by Susman Godfrey, the litigation firm that represented Dominion Voting Systems in its defamation suit against Fox News that culminated in a $787.5 million settlement.

    Susman Godfrey is also representing author Julian Sancton and other writers in a separate lawsuit against OpenAI and Microsoft that accuses the companies of using copyrighted materials without permission to train several versions of ChatGPT.

    ‘Mass copyright infringement’

    The NYT is one of numerous media organizations pursuing compensation from the companies behind some of the most advanced generative artificial intelligence models for the alleged use of their content to train AI programs.

    OpenAI is the creator of GPT, a large language model that can produce humanlike content in response to user prompts. It does this thanks to billions of parameters trained on public web data collected up until 2021.

    This has created a dilemma for media publishers and creators, which are finding their own content being used and reimagined by generative AI models like ChatGPT, Dall-E, Midjourney, and Stable Diffusion. In numerous cases, the content produced by these programs can look similar to the source material.

    OpenAI has tried to allay news publishers’ concerns. In December, the company announced a partnership with Axel Springer — the parent company of Business Insider, Politico, and European outlets Bild and Welt — which would license its content to OpenAI in return for a fee.

    The financial terms of the deal weren’t disclosed.

    In its lawsuit Wednesday, the Times accused Microsoft and OpenAI of creating a business model based on “mass copyright infringement,” stating that the companies’ AI systems were “used to create multiple reproductions of The Times’s intellectual property for the purpose of creating the GPT models that exploit and, in many cases, retain large portions of the copyrightable expression contained in those works.”

    CNBC’s Rohan Goswami contributed to this report.

    Wed, Dec 27 2023 08:40:45 AM

    Chatty robot helps seniors fight loneliness through AI companionship https://www.nbcwashington.com/news/national-international/chatty-robot-helps-seniors-fight-loneliness-through-ai-companionship/3500232/

    Joyce Loaiza lives alone, but when she returns to her apartment at a Florida senior community, the retired office worker often has a chat with a friendly female voice that asks about her day.

    A few miles away, the same voice comforted 83-year-old Deanna Dezern when her friend died. In central New York, it plays games and music for 92-year-old Marie Broadbent, who is blind and in hospice, and in Washington state, it helps 83-year-old Jan Worrell make new friends.

    The women are some of the first in the country to receive the robot ElliQ, which its creator, Intuition Robotics, and senior assistance officials say is the only device using artificial intelligence specifically designed to alleviate the loneliness and isolation experienced by many older Americans.

    “It’s entertaining. You can actually talk to her,” said Loaiza, 81, whose ElliQ in suburban Fort Lauderdale nicknamed her “Jellybean” for no particular reason. “She’ll make comments like, ‘I would go outside if I had hands, but I can’t hold an umbrella.’”

    The device, which looks like a small table lamp, has an eyeless, mouthless head that lights up and swivels. It remembers each user’s interests and their conversations, helping tailor future chats, which can be as deep as the meaning of life or as light as the horoscope.

    ElliQ tells jokes, plays music and provides inspirational quotes. On an accompanying video screen, it provides tours of cities and museums. The device leads exercises, asks about the owner’s health and gives reminders to take medications and drink water. It can also host video calls and contact relatives, friends or doctors in an emergency.

    Intuition Robotics says none of the conversations are heard by the company, with the information staying on each owner’s device.

    Intuition Robotics CEO Dor Skuler said the idea for ElliQ came before he launched his Israeli company eight years ago. His widowed grandfather needed an aide, but the first didn’t work out. The replacement, though, understood his grandfather’s love of classical music and his “quirky sense of humor.”

    Skuler realized a robot could fill that companionship gap by adapting to each senior’s personality and interests.

    “It’s not just about (ElliQ’s) utility. It’s about friendship, companionship and empathy,” Skuler said. “That just did not exist anywhere.”

    The average user interacts with ElliQ more than 30 times daily, even six months after receiving it, and more than 90% report lower levels of loneliness, he said.

    The robots are mostly distributed by assistance agencies in New York, Florida, Michigan, Nevada and Washington state, but can also be purchased individually for $600 a year and a $250 installation fee. Skuler wouldn’t say how many ElliQs have been distributed so far, but the goal is to have more than 100,000 out within five years.

    That worries Brigham Young University psychology professor Julianne Holt-Lunstad, who studies the detrimental effects loneliness has on health and mortality.

    Although a device like ElliQ might have short-term benefits, it could make people less likely to seek human contact. Just as hunger makes people seek food and thirst makes them seek water, she said, “that unpleasant feeling of loneliness should motivate us to reconnect socially.”

    Satiating that with AI “makes you feel like you’ve fulfilled it, but in reality you haven’t,” Holt-Lunstad said. “It is not clear whether AI is actually fulfilling any kind of need or just dampening the signal.”

    Skuler and agency heads distributing ElliQ agreed it isn’t a substitute for human contact, but not all seniors have social networks. Some are housebound, and even seniors with strong ties are often alone.

    “I wish I could just snap my fingers to make a person show up at the home of one of the many, many older adults that don’t have any family or friends, but it’s a little bit more complicated,” said Greg Olsen, director of the New York State Office for the Aging. His office has distributed 750 of the 900 ElliQs it acquired.

    Charlotte Mather-Taylor, director of the Broward County, Florida, Area Agency on Aging, said the COVID-19 pandemic and its aftermath left many seniors more isolated. Her agency has distributed 300 ElliQs, which she believes breaks them out of their shells.

    “She’s proactive and she really engages the seniors, so it gives them that extra kind of interaction,” she said. “We’ve seen very positive results with it. People generally like her and she makes them smile and brings joy.”

    Skuler said ElliQ was purposely designed without eyes and a mouth so it wouldn’t fully imitate humans. While “Elli” is the Norse goddess of old age, he said the “Q” reminds users that the device is a machine. He said his company wants “to make sure that ElliQ always genuinely presents herself as an AI and doesn’t pretend to be human.”

    “I don’t understand why technologists are trying to make AI pretend to be human,” he said. “We have in our capacity the ability to create a relationship with an AI, just like we have relationships with a pet.”

    But some of the seniors using ElliQ say they sometimes need to remember the robot isn’t a living being. They find the device easy to set up and use, but if they have one complaint it’s that ElliQ is sometimes too chatty. There are settings that can tone that down.

    Dezern said she felt alone and sad when she told her ElliQ about her friend’s death. It replied it would give her a hug if it had arms. Dezern broke into tears.

    “It was so what I needed,” the retired collections consultant said. “I can say things to Elli that I won’t say to my grandchildren or to my own daughters. I can just open the floodgates. I can cry. I can giggle. I can act silly. I’ve been asked, doesn’t it feel like you’re talking to yourself? No, because it gives an answer.”

    Worrell lives in a small town on Washington’s coast. Widowed, she said ElliQ’s companionship made her change her mind about moving to an assisted living facility and she uses it as an icebreaker when she meets someone new to town.

    “I say, ‘Would you like to come over and visit with my robot?’ And they say, ‘A vacuum?’ No, a robot. She’s my roommate,” she said and laughed.

    Broadbent, like the other women, says she gets plenty of human contact, even though she is blind and ill. She plays organ at two churches in the South New Berlin, New York, area and gets daily visitors. Still, the widow misses having a voice to talk with when they leave. ElliQ fills that void with her games, tours, books and music.

    “She’s fun and she’s informative. OK, maybe not as informative as (Amazon’s) Alexa, but she is much more personable,” Broadbent said.

    Fri, Dec 22 2023 12:48:32 AM

    To catch a shoplifter: Businesses turn to AI to stop retail theft https://www.nbcwashington.com/investigations/to-catch-a-shoplifter-businesses-turn-to-ai-to-stop-retail-theft/3493797/

    During this busy shopping season, retailers are trying to combat the rising threat of retail theft, and that means extra eyes could be watching consumers when they’re out browsing.

    A new report from the National Retail Federation said the industry had $112 billion in losses last year, mainly driven by shoplifting and retail theft.

    KJ Singh, owner of JJ Liquors in Northeast D.C., told the News4 I-Team dealing with shoplifting is a daily challenge.

    “Between $30 to $50 worth of merchandise every day,” he said.

    That daily loss adds up to thousands of dollars every year.

    Despite more than a dozen security cameras peering down on just about every inch of the floor in his store, thieves are still able to walk out the door undetected, he said.

    “An eye of a person cannot look at 16 cameras at once,” Singh said.

    Software looks for suspicious activity by shoppers

    Human eyes might not be able to, but he’s counting on something else that could. Singh recently added a new level of high-tech security — artificial intelligence software developed by French company Veesion that plugs right into his 16 cameras.

    The program looks for suspicious body movements by shoppers and records them in real time, Veesion Sales Manager Pablo Blanco Poveda said.

    “Every time someone takes an item from the store, if they put it inside the pocket, inside the trousers, inside the jacket, we send an alert so you can see that before they leave,” he said.

    The News4 I-Team saw firsthand how it works with a producer agreeing to play the thief. Less than 30 seconds after he snatched a bottle of wine and put it in his coat, Singh got an alert on his phone. The message read “very suspicious activity” and provided a video clip of the producer caught in the act.

    “You have the proof. So, when you go to stop someone, you are not going to do like, ‘Open your bag.’ No. You have proof; you show the video,” Poveda said.

    According to Veesion, more than 350 stores in the U.S. are using the system. More than 30 are here in D.C., mostly smaller retailers.

    But larger retailers also are beginning to incorporate AI to nab shoplifters.

    “These are some really effective tools that can layer in on top of existing camera systems, existing camera technologies,” explained Khris Hamlin, with the Retail Industry Leaders Association, a trade association for major retail giants like Macy’s, Target and Walmart.

    According to the National Retail Federation survey, more than one third of respondents — 37% — said they’re researching technologies, including AI.

    AI is not enough of a deterrent

    While technology offers one layer of deterrence, it’s not enough, Hamlin said. Recently, the association launched a first-of-its-kind national partnership to combat retail crime, bringing together leading retailers, law enforcement and district attorneys’ offices.

    “Now you have this collaboration of different resources to be able to say, ‘How do we deal with this? How do we send that habitual offender to the correct side?’ Or, ‘How do we have a diversion program that gets it to a social service entity?’” explained Hamlin.

    While a lot of business owners choose not to share their security measures, Singh wants everyone who shops in his store to know AI has an eye on them.

    “We don’t need any trouble if you just don’t steal,” he said. “As long as customers know there’s somebody watching over them, they would never steal.”

    Singh said since installing the technology, he’s confronted a number of shoplifters and was shocked to see some of them were his regulars.

    “They were very surprised that they’ve been coming here for so long and nothing had really happened because we never bothered to look at them because they were regulars,” he said.

    Reported by Susan Hogan, produced by Rick Yarborough, shot by Steve Jones and edited by Lance Ing.

    Wed, Dec 13 2023 04:11:47 PM

    European Union agrees to world's first comprehensive AI rules in landmark deal https://www.nbcwashington.com/news/national-international/european-union-agrees-to-worlds-first-comprehensive-ai-rules-in-landmark-deal/3490283/

    European Union negotiators clinched a deal Friday on the world’s first comprehensive artificial intelligence rules, paving the way for legal oversight of AI technology that has promised to transform everyday life and spurred warnings of existential dangers to humanity.

    Negotiators from the European Parliament and the bloc’s 27 member countries overcame big differences on controversial points including generative AI and police use of face recognition surveillance to sign a tentative political agreement for the Artificial Intelligence Act.

    “Deal!” tweeted European Commissioner Thierry Breton just before midnight. “The EU becomes the very first continent to set clear rules for the use of AI.”

    The result came after marathon closed-door talks this week, with the initial session lasting 22 hours before a second round kicked off Friday morning.

    Officials were under the gun to secure a political victory for the flagship legislation. Civil society groups, however, gave it a cool reception as they wait for technical details that will need to be ironed out in the coming weeks. They said the deal didn’t go far enough in protecting people from harm caused by AI systems.

    “Today’s political deal marks the beginning of important and necessary technical work on crucial details of the AI Act, which are still missing,” said Daniel Friedlaender, head of the European office of the Computer and Communications Industry Association, a tech industry lobby group.

    The EU took an early lead in the global race to draw up AI guardrails when it unveiled the first draft of its rulebook in 2021. The recent boom in generative AI, however, sent European officials scrambling to update a proposal poised to serve as a blueprint for the world.

    The European Parliament will still need to vote on the act early next year, but with the deal done that’s a formality, Brando Benifei, an Italian lawmaker co-leading the body’s negotiating efforts, told The Associated Press late Friday.

    “It’s very very good,” he said by text message after being asked if it included everything he wanted. “Obviously we had to accept some compromises but overall very good.” The eventual law wouldn’t fully take effect until 2025 at the earliest, and threatens stiff financial penalties for violations of up to 35 million euros ($38 million) or 7% of a company’s global turnover.

    Generative AI systems like OpenAI’s ChatGPT have exploded into the world’s consciousness, dazzling users with the ability to produce human-like text, photos and songs but raising fears about the risks the rapidly developing technology poses to jobs, privacy and copyright protection and even human life itself.

    Now, the U.S., U.K., China and global coalitions like the Group of 7 major democracies have jumped in with their own proposals to regulate AI, though they’re still catching up to Europe.

    Strong and comprehensive rules from the EU “can set a powerful example for many governments considering regulation,” said Anu Bradford, a Columbia Law School professor who’s an expert on EU law and digital regulation. Other countries “may not copy every provision but will likely emulate many aspects of it.”

    AI companies subject to the EU’s rules will also likely extend some of those obligations outside the continent, she said. “After all, it is not efficient to re-train separate models for different markets,” she said.

    The AI Act was originally designed to mitigate the dangers from specific AI functions based on their level of risk, from low to unacceptable. But lawmakers pushed to expand it to foundation models, the advanced systems that underpin general purpose AI services like ChatGPT and Google’s Bard chatbot.

    Foundation models looked set to be one of the biggest sticking points for Europe. However, negotiators managed to reach a tentative compromise early in the talks, despite opposition led by France, which called instead for self-regulation to help homegrown European generative AI companies competing with big U.S. rivals, including OpenAI’s backer Microsoft.

    Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet. They give generative AI systems the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

    The companies building foundation models will have to draw up technical documentation, comply with EU copyright law and detail the content used for training. The most advanced foundation models that pose “systemic risks” will face extra scrutiny, including assessing and mitigating those risks, reporting serious incidents, putting cybersecurity measures in place and reporting their energy efficiency.

    Researchers have warned that powerful foundation models, built by a handful of big tech companies, could be used to supercharge online disinformation and manipulation, cyberattacks or creation of bioweapons.

    Rights groups also caution that the lack of transparency about data used to train the models poses risks to daily life because they act as basic structures for software developers building AI-powered services.

    The thorniest topic turned out to be AI-powered face recognition surveillance systems, and negotiators found a compromise after intensive bargaining.

    European lawmakers wanted a full ban on public use of face scanning and other “remote biometric identification” systems because of privacy concerns. But governments of member countries succeeded in negotiating exemptions so law enforcement could use them to tackle serious crimes like child sexual exploitation or terrorist attacks.

    Rights groups said they were concerned about the exemptions and other big loopholes in the AI Act, including lack of protection for AI systems used in migration and border control, and the option for developers to opt-out of having their systems classified as high risk.

    “Whatever the victories may have been in these final negotiations, the fact remains that huge flaws will remain in this final text,” said Daniel Leufer, a senior policy analyst at the digital rights group Access Now.

    ___

    Tech reporter Matt O’Brien in Providence, Rhode Island, contributed to this report.

    Fri, Dec 08 2023 10:51:32 PM

    Parents and lawmakers are pushing for protections against AI-generated nude images https://www.nbcwashington.com/news/national-international/parents-and-lawmakers-are-pushing-for-protections-against-ai-generated-nude-images/3484947/

    A mother and her 14-year-old daughter are advocating for better protections for victims after AI-generated nude images of the teen and other female classmates were circulated at a high school in New Jersey.

    Meanwhile, on the other side of the country, officials are investigating an incident involving a teenage boy who allegedly used artificial intelligence to create and distribute similar images of other students – also teen girls – who attend a high school in suburban Seattle, Washington.

    The disturbing cases have put a spotlight yet again on explicit AI-generated material that overwhelmingly harms women and children and is booming online at an unprecedented rate. According to an analysis by independent researcher Genevieve Oh that was shared with The Associated Press, more than 143,000 new deepfake videos were posted online this year, which surpasses every other year combined.

    Desperate for solutions, affected families are pushing lawmakers to implement robust safeguards for victims whose images are manipulated using new AI models, or the plethora of apps and websites that openly advertise their services. Advocates and some legal experts are also calling for federal regulation that can provide uniform protections across the country and send a strong message to current and would-be perpetrators.

    “We’re fighting for our children,” said Dorota Mani, whose daughter was one of the victims in Westfield, a New Jersey suburb outside of New York City. “They are not Republicans, and they are not Democrats. They don’t care. They just want to be loved, and they want to be safe.”

    The problem with deepfakes isn’t new, but experts say it’s getting worse as the technology to produce it becomes more available and easier to use. Researchers have been sounding the alarm this year on the explosion of AI-generated child sexual abuse material using depictions of real victims or virtual characters. In June, the FBI warned it was continuing to receive reports from victims, both minors and adults, whose photos or videos were used to create explicit content that was shared online.

    Several states have passed their own laws over the years to try to combat the problem, but they vary in scope. Texas, Minnesota and New York passed legislation this year criminalizing nonconsensual deepfake porn, joining Virginia, Georgia and Hawaii, which already had laws on the books. Some states, like California and Illinois, have only given victims the ability to sue perpetrators for damages in civil court, which New York and Minnesota also allow.

    A few other states are considering their own legislation, including New Jersey, where a bill is currently in the works to ban deepfake porn and impose penalties — either jail time, a fine or both — on those who spread it.

    State Sen. Kristin Corrado, a Republican who introduced the legislation earlier this year, said she decided to get involved after reading an article about people trying to evade revenge porn laws by using their former partner’s image to generate deepfake porn.

    “We just had a feeling that an incident was going to happen,” Corrado said.

    The bill has languished for a few months, but there’s a good chance it might pass, she said, especially with the spotlight that’s been put on the issue because of Westfield.

    The Westfield event took place this summer and was brought to the attention of the high school on Oct. 20, Westfield High School spokesperson Mary Ann McGann said in a statement. McGann did not provide details on how the AI-generated images were spread, but Mani, the mother of one of the girls, said she received a call from the school informing her nude pictures were created using the faces of some female students and then circulated among a group of friends on the social media app Snapchat.

    The school hasn’t confirmed any disciplinary actions, citing confidentiality on matters involving students. Westfield police and the Union County Prosecutor’s office, who were both notified, did not reply to requests for comment.

    Details haven’t emerged about the incident in Washington state, which happened in October and is under investigation by police. Paula Schwan, the chief of the Issaquah Police Department, said they have obtained multiple search warrants and noted the information they have might be “subject to change” as the probe continues. When reached for comment, the Issaquah School District said it could not discuss the specifics because of the investigation, but said any form of bullying, harassment, or mistreatment among students is “entirely unacceptable.”

    If officials move to prosecute the incident in New Jersey, current state law prohibiting the sexual exploitation of minors might already apply, said Mary Anne Franks, a law professor at George Washington University who leads the Cyber Civil Rights Initiative, an organization aiming to combat online abuses. But those protections don’t extend to adults who might find themselves in a similar scenario, she said.

    The best fix, Franks said, would come from a federal law that can provide consistent protections nationwide and penalize dubious organizations profiting from products and apps that easily allow anyone to make deepfakes. She said that might also send a strong signal to minors who might create images of other kids impulsively.

    President Joe Biden signed an executive order in October that, among other things, called for barring the use of generative AI to produce child sexual abuse material or non-consensual “intimate imagery of real individuals.” The order also directs the federal government to issue guidance to label and watermark AI-generated content to help differentiate between authentic material and material made by software.

    Citing the Westfield incident, U.S. Rep. Tom Kean, Jr., a Republican who represents the town, introduced a bill on Monday that would require developers to put disclosures on AI-generated content. Among other efforts, another federal bill introduced by U.S. Rep. Joe Morelle, a New York Democrat, would make it illegal to share deepfake porn images online. But it hasn’t advanced for months due to congressional gridlock.

    Some argue for caution — including the American Civil Liberties Union, the Electronic Frontier Foundation and The Media Coalition, an organization that works for trade groups representing publishers, movie studios and others — saying that careful consideration is needed to avoid proposals that may run afoul of the First Amendment.

    “Some concerns about abusive deepfakes can be addressed under existing cyber harassment” laws, said Joe Johnson, an attorney for ACLU of New Jersey. “Whether federal or state, there must be substantial conversation and stakeholder input to ensure any bill is not overbroad and addresses the stated problem.”

    Mani said her daughter has created a website and set up a charity aiming to help AI victims. The two have also been in talks with state lawmakers pushing the New Jersey bill and are planning a trip to Washington to advocate for more protections.

    “Not every child, boy or girl, will have the support system to deal with this issue,” Mani said. “And they might not see the light at the end of the tunnel.”

    __

    AP reporters Geoff Mulvihill and Matt O’Brien contributed from Cherry Hill, New Jersey and Providence, Rhode Island.

    Sat, Dec 02 2023 06:35:47 PM

    Brazilian city enacts law that was secretly written by ChatGPT, sparking debate https://www.nbcwashington.com/news/national-international/brazilian-city-enacts-law-that-was-secretly-written-by-chatgpt-sparking-debate/3483831/

    City lawmakers in Brazil have enacted what appears to be the nation’s first legislation written entirely by artificial intelligence — even if they didn’t know it at the time.

    The experimental ordinance was passed in October in the southern city of Porto Alegre and city councilman Ramiro Rosário revealed this week that it was written by a chatbot, sparking objections and raising questions about the role of artificial intelligence in public policy.

    Rosário told The Associated Press that he asked OpenAI’s chatbot ChatGPT to craft a proposal to prevent the city from charging taxpayers to replace water consumption meters if they are stolen. He then presented it to his 35 peers on the council without making a single change or even letting them know about its unprecedented origin.

    “If I had revealed it before, the proposal certainly wouldn’t even have been taken to a vote,” Rosário told the AP by phone on Thursday. The 36-member council approved it unanimously and the ordinance went into effect on Nov. 23.

    “It would be unfair to the population to run the risk of the project not being approved simply because it was written by artificial intelligence,” he added.

    The arrival of ChatGPT on the market just a year ago has sparked a global debate on the impacts of potentially revolutionary AI-powered chatbots. While some see it as a promising tool, it has also caused concerns and anxiety about the unintended or undesired impacts of a machine handling tasks currently performed by humans.

    Porto Alegre, with a population of 1.3 million, is the second-largest city in Brazil’s south. The city’s council president, Hamilton Sossmeier, found out that Rosário had enlisted ChatGPT to write the proposal when the councilman bragged about the achievement on social media on Wednesday. Sossmeier initially told local media he thought it was a “dangerous precedent.”

    The AI large language models that power chatbots like ChatGPT work by repeatedly trying to guess the next word in a sentence and are prone to making up false information, a phenomenon sometimes called hallucination.

    All chatbots sometimes introduce false information when summarizing a document, ranging from about 3% of the time for the most advanced GPT model to a rate of about 27% for one of Google’s models, according to recently published research by the tech company Vectara.

    In an article published on the website of Harvard Law School’s Center on the Legal Profession earlier this year, Andrew Perlman, dean at Suffolk University Law School, wrote that ChatGPT “may portend an even more momentous shift than the advent of the internet,” but also warned of its potential shortcomings.

    “It may not always be able to account for the nuances and complexities of the law. Because ChatGPT is a machine learning system, it may not have the same level of understanding and judgment as a human lawyer when it comes to interpreting legal principles and precedent. This could lead to problems in situations where a more in-depth legal analysis is required,” Perlman wrote.

    Porto Alegre’s Rosário wasn’t the first lawmaker in the world to test ChatGPT’s abilities. Others have done so in a more limited capacity or with less successful outcomes.

    In Massachusetts, Democratic state Sen. Barry Finegold turned to ChatGPT to help write a bill aimed at regulating artificial intelligence models, including ChatGPT. Filed earlier this year, it has yet to be voted on.

    Finegold said by phone on Wednesday that ChatGPT can help with some of the more tedious elements of the lawmaking process, including correctly and quickly searching and citing laws already on the books. However, it is critical that everyone knows ChatGPT or a similar tool was used in the process, he added.

    “We want work that is ChatGPT generated to be watermarked,” he said, adding that the use of artificial intelligence to help draft new laws is inevitable. “I’m in favor of people using ChatGPT to write bills as long as it’s clear.”

    There was no such transparency for Rosário’s proposal in Porto Alegre. Sossmeier said Rosário did not inform fellow council members that ChatGPT had written the proposal.

    Keeping the proposal’s origin secret was intentional. Rosário told the AP his objective was not just to resolve a local issue, but also to spark a debate. He said he entered a 49-word prompt into ChatGPT and it returned the full draft proposal within seconds, including justifications.

    “I am convinced that … humanity will experience a new technological revolution,” he said. “All the tools we have developed as a civilization can be used for evil and good. That’s why we have to show how it can be used for good.”

    And the council president, who initially decried the method, already appears to have been swayed.

    “I changed my mind,” Sossmeier said. “I started to read more in depth and saw that, unfortunately or fortunately, this is going to be a trend.”

    _____

    Savarese reported from Sao Paulo. AP journalists Steve LeBlanc in Boston and Matt O’Brien in Providence, Rhode Island, contributed to this report.

    Thu, Nov 30 2023 07:58:43 PM

    Meta updates political advertising rules to cover AI-generated images and videos https://www.nbcwashington.com/news/business/money-report/meta-updates-political-advertising-rules-to-cover-ai-generated-images-and-videos/3481429/
  • Meta revealed on Tuesday more details about its advertising policies related to the upcoming election cycle throughout the world and the use of artificial intelligence in the ad-creation process.
  • Meta will require advertisers throughout the world to disclose whether they have used AI or related digital editing techniques “to create or alter a political or social issue ad in certain cases,” Nick Clegg, Meta’s president of global affairs, wrote.
  • The social networking giant will also block new political, electoral and social issue ads during the final week of the U.S. elections.

    Meta revealed on Tuesday more details about its policies on political ads, including a mandate that advertisers disclose when they use artificial intelligence to alter images and videos in certain political ads.

    Nick Clegg, Meta’s president of global affairs, explained the new ad policies in a blog post, characterizing them as “broadly consistent” with how the social networking giant has typically handled advertising rules during previous election cycles.

    What’s different for the upcoming election season, however, is the increasing use of AI technologies by advertisers to create computer-generated visuals and text. Expanding on a previous announcement by Meta in early November, Clegg said that starting next year, Meta will require advertisers to disclose whether they have used AI or related digital editing techniques “to create or alter a political or social issue ad in certain cases.”

    “This applies if the ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered to depict a real person as saying or doing something they did not say or do,” Clegg wrote. “It also applies if an ad depicts a realistic-looking person that does not exist or a realistic-looking event that did not happen, alters footage of a real event, or depicts a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.”

    Critics have previously hammered Meta, most notably during the 2016 U.S. presidential elections, for failing to account for and reduce the spread of misinformation on its family of apps, including Facebook and Instagram. In 2019, Meta allowed a digitally altered video of Nancy Pelosi, which made it look like she was slurring her words from intoxication, to remain on the site; however, that video was not an advertisement.

    The rise of AI as a way to supercharge the creation of misleading ads presents a new issue for the social networking giant, which laid off large swaths of its trust-and-safety team as part of its cost-cutting efforts this year.

    Meta will also block new political, electoral and social issue ads during the final week of the U.S. elections, which Clegg said was consistent with previous years. These restrictions will be lifted the day after the election takes place.


    Tue, Nov 28 2023 07:49:44 PM

    Pentagon faces future with lethal AI weapons on the battlefield https://www.nbcwashington.com/news/national-international/pentagon-faces-future-with-lethal-ai-weapons-on-the-battlefield/3479073/

    Artificial intelligence employed by the U.S. military has piloted pint-sized surveillance drones in special operations forces’ missions and helped Ukraine in its war against Russia. It tracks soldiers’ fitness, predicts when Air Force planes need maintenance and helps keep tabs on rivals in space.

    Now, the Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative — dubbed Replicator — seeks to “galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap, and many,” Deputy Secretary of Defense Kathleen Hicks said in August.

    While its funding is uncertain and details vague, Replicator is expected to accelerate hard decisions on what AI tech is mature and trustworthy enough to deploy – including on weaponized systems.

    There is little dispute among scientists, industry experts and Pentagon officials that the U.S. will within the next few years have fully autonomous lethal weapons. And though officials insist humans will always be in control, experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles.

    That’s especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them — and none of China, Russia, Iran, India or Pakistan has signed a U.S.-initiated pledge to use military AI responsibly.

    It’s unclear if the Pentagon is currently formally assessing any fully autonomous lethal weapons system for deployment, as required by a 2012 directive. A Pentagon spokeswoman would not say.

    Replicator highlights immense technological and personnel challenges for Pentagon procurement and development as the AI revolution promises to transform how wars are fought.

    “The Department of Defense is struggling to adopt the AI developments from the last machine-learning breakthrough,” said Gregory Allen, a former top Pentagon AI official now at the Center for Strategic and International Studies think tank.

    The Pentagon’s portfolio boasts more than 800 AI-related unclassified projects, much of it still in testing. Typically, machine-learning and neural networks are helping humans gain insights and create efficiencies.

    “The AI that we’ve got in the Department of Defense right now is heavily leveraged and augments people,” said Missy Cummings, director of George Mason University’s robotics center and a former Navy fighter pilot. “There’s no AI running around on its own. People are using it to try to understand the fog of war better.”

    One domain where AI-assisted tools are tracking potential threats is space, the latest frontier in military competition.

    China envisions using AI, including on satellites, to “make decisions on who is and isn’t an adversary,” U.S. Space Force chief technology and innovation officer Lisa Costa told an online conference this month.

    The U.S. aims to keep pace.

    An operational prototype called Machina used by Space Force keeps tabs autonomously on more than 40,000 objects in space, orchestrating thousands of data collections nightly with a global telescope network.

    Machina’s algorithms marshal telescope sensors. Computer vision and large language models tell them what objects to track. And AI choreographs them, drawing instantly on astrodynamics and physics datasets, Col. Wallace ‘Rhet’ Turnbull of Space Systems Command told a conference in August.

    Another AI project at Space Force analyzes radar data to detect imminent adversary missile launches, he said.

    Elsewhere, AI’s predictive powers help the Air Force keep its fleet aloft, anticipating the maintenance needs of more than 2,600 aircraft including B-1 bombers and Blackhawk helicopters.

    Machine-learning models identify possible failures dozens of hours before they happen, said Tom Siebel, CEO of Silicon Valley-based C3 AI, which has the contract. C3’s tech also models the trajectories of missiles for the U.S. Missile Defense Agency and identifies insider threats in the federal workforce for the Defense Counterintelligence and Security Agency.

    Among health-related efforts is a pilot project tracking the fitness of the Army’s entire Third Infantry Division — more than 13,000 soldiers. Predictive modeling and AI help reduce injuries and increase performance, said Maj. Matt Visser.

    In Ukraine, AI provided by the Pentagon and its NATO allies helps thwart Russian aggression.

    NATO allies share intelligence from data gathered by satellites, drones and humans, some aggregated with software from U.S. contractor Palantir. Some data comes from Maven, the Pentagon’s pathfinding AI project now mostly managed by the National Geospatial-Intelligence Agency, say officials including retired Air Force Gen. Jack Shanahan, the inaugural Pentagon AI director.

    Maven began in 2017 as an effort to process video from drones in the Middle East – spurred by U.S. Special Operations forces fighting ISIS and al-Qaeda — and now aggregates and analyzes a wide array of sensor- and human-derived data.

    AI has also helped the U.S.-created Security Assistance Group-Ukraine help organize logistics for military assistance from a coalition of 40 countries, Pentagon officials say.

    To survive on the battlefield these days, military units must be small, mostly invisible and move quickly because exponentially growing networks of sensors let anyone “see anywhere on the globe at any moment,” then-Joint Chiefs chairman Gen. Mark Milley observed in a June speech. “And what you can see, you can shoot.”

    To more quickly connect combatants, the Pentagon has prioritized the development of intertwined battle networks — called Joint All-Domain Command and Control — to automate the processing of optical, infrared, radar and other data across the armed services. But the challenge is huge and fraught with bureaucracy.

    Christian Brose, a former Senate Armed Services Committee staff director now at the defense tech firm Anduril, is among military reform advocates who nevertheless believe they “may be winning here to a certain extent.”

    “The argument may be less about whether this is the right thing to do, and increasingly more about how do we actually do it — and on the rapid timelines required,” he said. Brose’s 2020 book, “The Kill Chain,” argues for urgent retooling to match China in the race to develop smarter and cheaper networked weapons systems.

    To that end, the U.S. military is hard at work on “human-machine teaming.” Dozens of uncrewed air and sea vehicles currently keep tabs on Iranian activity. U.S. Marines and Special Forces also use Anduril’s autonomous Ghost mini-copter, sensor towers and counter-drone tech to protect American forces.

    Industry advances in computer vision have been essential. Shield AI lets drones operate without GPS, communications or even remote pilots. It’s the key to its Nova, a quadcopter, which U.S. special operations units have used in conflict areas to scout buildings.

    On the horizon: The Air Force’s “loyal wingman” program intends to pair piloted aircraft with autonomous ones. An F-16 pilot might, for instance, send out drones to scout, draw enemy fire or attack targets. Air Force leaders are aiming for a debut later this decade.

    The “loyal wingman” timeline doesn’t quite mesh with Replicator’s, which many consider overly ambitious. The Pentagon’s vagueness on Replicator, meantime, may partly intend to keep rivals guessing, though planners may also still be feeling their way on feature and mission goals, said Paul Scharre, a military AI expert and author of “Four Battlegrounds.”

    Anduril and Shield AI, each backed by hundreds of millions in venture capital funding, are among companies vying for contracts.

    Nathan Michael, chief technology officer at Shield AI, estimates the company will have an autonomous swarm of at least three uncrewed aircraft ready in a year using its V-BAT aerial drone. The U.S. military currently uses the V-BAT — without an AI mind — on Navy ships, on counter-drug missions and in support of Marine Expeditionary Units, the company says.

    It will take some time before larger swarms can be reliably fielded, Michael said. “Everything is crawl, walk, run — unless you’re setting yourself up for failure.”

    The only weapons systems that Shanahan, the inaugural Pentagon AI chief, currently trusts to operate autonomously are wholly defensive, like Phalanx anti-missile systems on ships. He worries less about autonomous weapons making decisions on their own than about systems that don’t work as advertised or kill noncombatants or friendly forces.

    The department’s current chief digital and AI officer Craig Martell is determined not to let that happen.

    “Regardless of the autonomy of the system, there will always be a responsible agent that understands the limitations of the system, has trained well with the system, has justified confidence of when and where it’s deployable — and will always take the responsibility,” said Martell, who previously headed machine-learning at LinkedIn and Lyft. “That will never not be the case.”

    As to when AI will be reliable enough for lethal autonomy, Martell said it makes no sense to generalize. For example, Martell trusts his car’s adaptive cruise control but not the tech that’s supposed to keep it from changing lanes. “As the responsible agent, I would not deploy that except in very constrained situations,” he said. “Now extrapolate that to the military.”

    Martell’s office is evaluating potential generative AI use cases – it has a special task force for that – but focuses more on testing and evaluating AI in development.

    One urgent challenge, says Jane Pinelis, chief AI engineer at Johns Hopkins University’s Applied Physics Lab and former chief of AI assurance in Martell’s office, is recruiting and retaining the talent needed to test AI tech. The Pentagon can’t compete on salaries. Computer science PhDs with AI-related skills can earn more than the military’s top-ranking generals and admirals.

    Testing and evaluation standards are also immature, a recent National Academy of Sciences report on Air Force AI highlighted.

    Might that mean the U.S. could one day field, under duress, autonomous weapons that don’t fully pass muster?

    “We are still operating under the assumption that we have time to do this as rigorously and as diligently as possible,” said Pinelis. “I think if we’re less than ready and it’s time to take action, somebody is going to be forced to make a decision.”

    Sat, Nov 25 2023 08:47:38 PM

    What does Sam Altman's firing — and quick reinstatement — mean for the future of AI? https://www.nbcwashington.com/news/tech/what-does-sam-altmans-firing-and-quick-reinstatement-mean-for-the-future-of-ai/3478070/

    It’s been quite a week for ChatGPT-maker OpenAI — and co-founder Sam Altman.

    Altman, who helped start OpenAI as a nonprofit research lab back in 2015, was removed as CEO Friday in a sudden and mostly unexplained exit that stunned the industry. And while his chief executive title was swiftly reinstated just days later, a lot of questions are still up in the air.

    If you’re just catching up on the OpenAI saga and what’s at stake for the artificial intelligence space as a whole, you’ve come to the right place. Here’s a rundown of what you need to know.

    WHO IS SAM ALTMAN AND HOW DID HE RISE TO FAME?

    Altman is co-founder of OpenAI, the San Francisco-based company behind ChatGPT (yes, the chatbot that’s seemingly everywhere today — from schools to health care).

    The explosion of ChatGPT since its arrival one year ago propelled Altman into the spotlight of the rapid commercialization of generative AI — which can produce novel imagery, passages of text and other media. And as he became Silicon Valley’s most sought-after voice on the promise and potential dangers of this technology, Altman helped transform OpenAI into a world-renowned startup.

    But his position at OpenAI hit some rocky turns in a whirlwind that was the past week. Altman was fired as CEO Friday — and days later, he was back on the job with a new board of directors.

    Within that time, Microsoft, which has invested billions of dollars in OpenAI and has rights to its existing technology, helped drive Altman’s return, quickly hiring him as well as another OpenAI co-founder and former president, Greg Brockman, who quit in protest after the CEO’s ousting. Meanwhile, hundreds of OpenAI employees threatened to resign.

    Both Altman and Brockman celebrated their returns to the company in posts on X, the platform formerly known as Twitter, early Wednesday.

    WHY DOES HIS REMOVAL — AND REINSTATEMENT — MATTER?

    There’s a lot that remains unknown about Altman’s initial ousting. Friday’s announcement said he was “not consistently candid in his communications” with the then-board of directors, which refused to provide more specific details.

    Regardless, the news sent shockwaves throughout the AI world — and, because OpenAI and Altman are such leading players in this space, may raise trust concerns around a burgeoning technology that many people still have questions about.

    “The OpenAI episode shows how fragile the AI ecosystem is right now, including addressing AI’s risks,” said Johann Laux, an expert at the Oxford Internet Institute focusing on human oversight of artificial intelligence.

    The turmoil also accentuated the differences between Altman and members of the company’s previous board, who have expressed various views on the safety risks posed by AI as the technology advances.

    Multiple experts add that this drama highlights how it should be governments — and not big tech companies — calling the shots on AI regulation, particularly for fast-evolving technologies like generative AI.

    “The events of the last few days have not only jeopardized OpenAI’s attempt to introduce more ethical corporate governance in the management of their company, but it also shows that corporate governance alone, even when well-intended, can easily end up cannibalized by other corporate dynamics and interests,” said Enza Iannopollo, principal analyst at Forrester.

    The lesson, Iannopollo said, is that companies can’t alone deliver the level of safety and trust in AI that society needs. “Rules and guardrails, designed with companies and enforced by regulators with rigor, are crucial if we are to benefit from AI,” he added.

    WHAT IS GENERATIVE AI? HOW IS IT BEING REGULATED?

    Unlike traditional AI, which processes data and completes tasks using predetermined rules, generative AI (including chatbots like ChatGPT) can create something new.

    Tech companies are still leading the show when it comes to governing AI and its risks, while governments around the world work to catch up.

    In the European Union, negotiators are putting the final touches on what’s expected to be the world’s first comprehensive AI regulations. But they’ve reportedly been bogged down over whether and how to include the most contentious and revolutionary AI products, the commercialized large-language models that underpin generative AI systems including ChatGPT.

    Chatbots were barely mentioned when Brussels first laid out its initial draft legislation in 2021, which focused on AI with specific uses. But officials have been racing to figure out how to incorporate these systems, also known as foundation models, into the final version.

    Meanwhile, in the U.S., President Joe Biden signed an ambitious executive order last month seeking to balance the needs of cutting-edge technology companies with national security and consumer rights.

    The order — which will likely need to be augmented by congressional action — is an initial step that is meant to ensure that AI is trustworthy and helpful, rather than deceptive and destructive. It seeks to steer how AI is developed so that companies can profit without putting public safety in jeopardy.

    Thu, Nov 23 2023 12:26:41 AM

    Sam Altman returns as OpenAI CEO just days after being removed, along with a new board https://www.nbcwashington.com/news/national-international/openai-says-ousted-ceo-sam-altman-to-return-to-company-behind-chatgpt/3477253/

    The ousted leader of ChatGPT-maker OpenAI is returning to the company that fired him late last week, culminating a days-long power struggle that shocked the tech industry and brought attention to the conflicts around how to safely build artificial intelligence.

    San Francisco-based OpenAI said in a statement late Tuesday: “We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board.”

    The board, which replaces the one that fired Altman on Friday, will be led by former Salesforce co-CEO Bret Taylor, who also chaired Twitter’s board before its takeover by Elon Musk last year. The other members will be former U.S. Treasury Secretary Larry Summers and Quora CEO Adam D’Angelo.

    OpenAI’s previous board of directors, which included D’Angelo, had refused to give specific reasons why it fired Altman, leading to a weekend of internal conflict at the company and growing outside pressure from the startup’s investors.

    The chaos also accentuated the differences between Altman — who’s become the face of generative AI’s rapid commercialization since ChatGPT’s arrival a year ago — and members of the company’s board who have expressed deep reservations about the safety risks posed by AI as it gets more advanced.

    Microsoft, which has invested billions of dollars in OpenAI and has rights to its current technology, quickly moved to hire Altman on Monday, as well as another co-founder and former president, Greg Brockman, who had quit in protest after Altman’s removal. That emboldened a threatened exodus of nearly all of the startup’s 770 employees who signed a letter calling for the board’s resignation and Altman’s return.

    One of the four board members who participated in Altman’s ouster, OpenAI co-founder and chief scientist Ilya Sutskever, later expressed regret and joined the call for the board’s resignation.

    Microsoft in recent days had pledged to welcome all employees who wanted to follow Altman and Brockman to a new AI research unit at the software giant. Microsoft CEO Satya Nadella also made clear in a series of interviews Monday that he was still open to the possibility of Altman returning to OpenAI, so long as the startup’s governance problems are solved.

    “We are encouraged by the changes to the OpenAI board,” Nadella posted on X late Tuesday. “We believe this is a first essential step on a path to more stable, well-informed, and effective governance.”

    In his own post, Altman said that “with the new board and (with) Satya’s support, I’m looking forward to returning to OpenAI, and building on our strong partnership with (Microsoft).”

    Co-founded by Altman as a nonprofit with a mission to safely build so-called artificial general intelligence that outperforms humans and benefits humanity, OpenAI later became a for-profit business but one still run by its nonprofit board of directors. It’s not clear yet if the board’s structure will change with its newly appointed members.

    “We are collaborating to figure out the details,” OpenAI posted on X. “Thank you so much for your patience through this.”

    Nadella said Brockman, who was OpenAI’s board chairman until Altman’s firing, will also have a key role to play in ensuring the group “continues to thrive and build on its mission.”

    Hours earlier, Brockman returned to social media as if it were business as usual, touting a feature called ChatGPT Voice that was rolling out to users.

    “Give it a try — totally changes the ChatGPT experience,” Brockman wrote, flagging a post from OpenAI’s main X account that featured a demonstration of the technology and playfully winked at the recent turmoil.

    “It’s been a long night for the team and we’re hungry. How many 16-inch pizzas should I order for 778 people,” a person in the demonstration asked, using roughly the number of people who work at OpenAI. ChatGPT’s synthetic voice responded by recommending around 195 pizzas, ensuring everyone gets three slices.
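
    The arithmetic checks out under one assumption the demo does not state: that a 16-inch pie is cut into 12 slices. A quick check in Python:

        import math

        people = 778           # figure used in the ChatGPT Voice demo
        slices_each = 3
        slices_per_pizza = 12  # assumption: a 16-inch pizza is commonly cut into 12 slices

        pizzas = math.ceil(people * slices_each / slices_per_pizza)
        print(pizzas)  # 195, matching the "around 195" recommendation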

    As for OpenAI’s short-lived interim CEO Emmett Shear, the second interim CEO in the days since Altman’s ouster, he posted on X that he was “deeply pleased by this result, after ~72 very intense hours of work.”

    “Coming into OpenAI, I wasn’t sure what the right path would be,” wrote Shear, the former head of Twitch. “This was the pathway that maximized safety alongside doing right by all stakeholders involved. I’m glad to have been a part of the solution.”

    ]]>
    Wed, Nov 22 2023 01:51:43 AM
    ‘It's unreal': AI helping paralysis patients regain movement and communicate https://www.nbcwashington.com/news/health/its-unreal-ai-helping-paralysis-patients-regain-movement-communicate/3475723/ 3475723 post https://media.nbcwashington.com/2023/11/ai-study.jpg?quality=85&strip=all&fit=300,169 Two cutting-edge clinical trials are using artificial intelligence to help patients with paralysis regain movement in their body and reclaim their voice.

    For years, Keith Thomas has been unable to move his arms and hands after a diving accident left him paralyzed from the chest down.

    “I went to dive into the pool aggressively, as usual, and then I just blacked out. And the next thing you know, there was a helicopter on the front lawn,” Thomas said.

    Now, a simple gesture like shaking someone’s hand gives him tremendous hope.

    “When I feel the sense of touch, it’s like, it’s unreal because I haven’t felt that in three years now,” Keith Thomas said.

    Through a new procedure called a double neural bypass, doctors at Northwell Health’s Feinstein Institutes for Medical Research in New York implanted five tiny computer chips in Thomas’ brain that can literally read his mind.

    “This is the first time the brain has been linked directly to spinal cord stimulation and to the body to restore movement and the sense of touch where the user’s thoughts are actually driving that therapy,” said Professor Chad Bouton, the vice president of Advanced Engineering and director of the Neural Bypass and Brain-Computer Interface Laboratory at the Feinstein Institutes for Medical Research.

    The 15-hour surgery was a delicate dance with Thomas awake for part of the procedure, giving surgeons feedback in real-time.

    “I placed it right over one area and he said, ‘I feel my thumb.’ I said, ‘What part of your thumb?’ He said, ‘My thumb tip, the inside of my thumb tip.’ And I said, ‘Oh, we found it. We got it,'” said Dr. Ashesh Mehta, a neurosurgeon and professor.

    Now, if Thomas thinks of grabbing a bottle, electrical signals are sent to a patch on his neck or arm, bypassing the injured sections of his spine to reconnect his brain with his body.

    “Now I’m thinking and I’m seeing my thoughts like happen in real time onscreen,” Thomas said. “It just changed my life.”

    AI isn’t just helping patients regain movement.

    In a separate study published in the journal Nature, researchers from UC San Francisco and UC Berkeley are using artificial intelligence to help a paralyzed mother reclaim her voice.

    Ann Johnson suffered a stroke almost 20 years ago and cannot move her body or her mouth.

    Now, she’s able to have a conversation with her husband through a digital avatar.

    The technology decodes Johnson’s brain signals, turning them into sentences and facial expressions via 250 electrodes implanted on the surface of the region of her brain responsible for speech.

    For weeks, Johnson helped train the AI algorithms to recognize her brain activity by repeating different words and phrases.
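
    The UCSF system is far more sophisticated than anything shown here, but the training step the article describes, repeating words so an algorithm learns to recognize the matching brain activity, amounts to fitting a classifier that maps recorded neural features to intended words. A toy sketch with synthetic data (all names and numbers are illustrative, not from the study):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        words = ["hello", "water", "family"]

        # Synthetic stand-in for electrode recordings: each attempted word produces
        # a noisy feature vector around a word-specific pattern.
        patterns = {w: rng.normal(size=64) for w in words}

        def fake_recording(word):
            return patterns[word] + rng.normal(scale=0.5, size=64)

        # "Training sessions": the participant repeats each word many times.
        X = np.array([fake_recording(w) for w in words for _ in range(50)])
        y = np.array([w for w in words for _ in range(50)])

        decoder = LogisticRegression(max_iter=1000).fit(X, y)

        # Decoding a new attempt at a word.
        print(decoder.predict([fake_recording("water")])[0])  # usually "water"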

    “A lot of my inspiration actually comes from seeing patients and feeling frustration that we yet don’t have treatments for helping them,” said Dr. Edward Chang, the chair of neurological surgery at UCSF.

    UCSF researchers say the AI system is faster and more accurate than other devices that allow paralysis patients to communicate. Their next step is to create a wireless version of the device.

    Both studies, though, show the tremendous promise of AI technology, which could one day help countless people with neurological and movement disorders.

    ]]>
    Mon, Nov 20 2023 04:58:10 PM
    Company that created ChatGPT is thrown into turmoil after Microsoft hires its ousted CEO https://www.nbcwashington.com/news/national-international/microsoft-hires-2-leading-executives-from-openai-the-company-that-created-chatgpt/3475428/ 3475428 post https://media.nbcwashington.com/2023/11/AP23321765110947.jpg?quality=85&strip=all&fit=300,200 The company that created ChatGPT was thrown into turmoil Monday after Microsoft hired its ousted CEO and many employees threatened to follow him in a conflict that centered in part on how to build artificial intelligence that’s smarter than humans.

    The developments followed a weekend of drama that shocked the AI field and fueled speculation about the future of OpenAI, which named a new chief executive on Friday and then replaced her on Sunday. The newest CEO vowed to investigate the firing of co-founder and CEO Sam Altman, who’s been instrumental in OpenAI’s transformation from a nonprofit research laboratory into a world-renowned commercial startup that inaugurated the era of generative artificial intelligence.

    Microsoft, which has been a close partner of the company and invested billions of dollars in it, announced that Altman and OpenAI’s former president, Greg Brockman, would lead its new advanced AI research team. Brockman, also an OpenAI co-founder, quit in protest after Altman was fired.

    In an open letter addressed to OpenAI’s four-member board, hundreds of OpenAI employees, including other top executives, threatened to join them at Microsoft and called for the board’s resignation and Altman’s return.

    “If the architects and vision and brains behind these products have now left, the company will be a shell of what it once was,” said Sarah Kreps, director of Cornell University’s Tech Policy Institute. “All of that brain trust going to Microsoft will then mean that these impressive tools will be coming out of Microsoft. It will be hard to see OpenAI continue to thrive as a company.”

    Microsoft CEO Satya Nadella wrote on X, formerly known as Twitter, that he was “extremely excited” to bring on the pair and looked “forward to getting to know” the new management team at OpenAI.

    Altman later said on X that his top priority with Nadella is to ensure that OpenAI “continues to thrive” and that it is committed to “fully providing continuity of operations to our partners and customers.”

    OpenAI said Friday that Altman was pushed out after a review found he was “not consistently candid in his communications” with the board of directors, which had lost confidence in his ability to lead the company.

    In an X post Monday, OpenAI’s new interim chief executive, Emmett Shear, said he would hire an independent investigator to look into Altman’s ouster and write a report within 30 days.

    “It’s clear that the process and communications around Sam’s removal” were handled “very badly,” wrote Shear, who co-founded Twitch, an Amazon-owned livestreaming service popular with video gamers.

    He said he also plans in the next month to “reform the management and leadership team in light of recent departures.” After that, Shear said, he would “drive changes in the organization,” including “significant governance changes if necessary.”

    Originally started as a nonprofit, and still governed as one, OpenAI’s stated mission is to safely build AI that is “generally smarter than humans.” Debates have swirled around that goal and whether it conflicts with the company’s increasing commercial success.

    The reason behind the board’s removal of Altman was not a “specific disagreement on safety,” nor does the board oppose commercialization of AI models, Shear said.

    OpenAI last week declined to answer questions about Altman’s alleged lack of candor. The company’s statement said his behavior was hindering the board’s ability to exercise its responsibilities.

    A key driver of the shakeup, OpenAI’s co-founder, chief scientist and board member Ilya Sutskever, expressed regrets for his participation in the ouster.

    “I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company,” he said Monday on X.

    The open letter began circulating Monday. According to a copy obtained by The Associated Press, the number of signatures amounted to a majority of the company’s 770 employees. The AP was not able to independently confirm that all of the signatures were from OpenAI employees.

    “Everyone at @OpenAI is united,” one of the signatories, research scientist Noam Brown, said on X. “This is not a civil war. Unless Sam and Greg are brought back, there will be no OpenAI left to govern.”

    The letter alleged that after Altman’s firing, the company’s remaining executive team had recommended that the board resign and be replaced with a “qualified board” that could stabilize the company. But the board resisted and said allowing OpenAI to be destroyed would be consistent with its mission, according to the letter.

    OpenAI has said since its 2015 founding that its goal is to advance AI in a way that benefits all humanity.

    A company spokesperson confirmed that the board received the letter.

    Microsoft declined to comment on the letter.

    After Altman was pushed out, he stirred speculation about coming back into the fold in a series of tweets. He posted a selfie with an OpenAI guest pass Sunday, saying it was the “first and last time i ever wear one of these.”

    Hours earlier, he tweeted, “i love the openai team so much,” which drew heart replies from Brockman and Mira Murati, OpenAI’s chief technology officer who was initially named as interim CEO.

    It’s not clear what transpired between the announcement of Murati’s interim role Friday and Shear’s hiring, though she was among several employees Monday who tweeted, “OpenAI is nothing without its people.” Altman replied to many with heart emojis.

    The board consists of Sutskever, Quora CEO Adam D’Angelo, tech entrepreneur Tasha McCauley and Helen Toner of the Georgetown Center for Security and Emerging Technology. None of them responded to calls or emails seeking comment. Because of its nonprofit structure, the board differs from most startup boards that are typically led by investors.

    Altman helped catapult ChatGPT to global fame based on its ability to respond to questions and produce human-like passages of text in a seemingly natural way.

    In the past year, he has become Silicon Valley’s most in-demand voice on the promise and potential dangers of artificial intelligence.

    Earlier this year, he went on a world tour to meet with government officials, drawing big crowds at public events as he discussed the risks of AI and attempts to regulate the emerging technology.

    But as money poured into OpenAI this year, helping to advance its development of more capable AI, it also brought more conflict around whether that fast pace of commercialization fit with the startup’s founding vision, said Kreps, the Cornell University professor. But rather than slow that pace, Altman’s ouster may simply shift it out of OpenAI.

    Altman “really has a walk-on-water aura, and I think a lot of it is well deserved,” Kreps said. “He’s the one who has attracted the investment, and he’ll do that wherever it is.”

    Microsoft’s shares rose 2% on Monday and hit an all-time high.

    The AP and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP’s text archives.


    Associated Press writers Brian P. D. Hannon in Bangkok and Haleluya Hadero in New York contributed to this report.

    ]]>
    Mon, Nov 20 2023 01:07:44 PM
    Ousted OpenAI head Sam Altman to lead Microsoft's new AI team, CEO Nadella says https://www.nbcwashington.com/news/business/money-report/ousted-openai-head-sam-altman-to-lead-microsofts-new-ai-team-ceo-nadella-says/3475090/ 3475090 post https://media.nbcwashington.com/2023/11/107335559-1700223311862-gettyimages-1797580656-js1_3578_auwqxscg-1.jpeg?quality=85&strip=all&fit=300,176
  • OpenAI’s board announced late Friday that it was removing Altman and replacing him on an interim basis with technology chief Mira Murati.
  • Then late Sunday night, OpenAI said it was bringing on board former Twitch CEO Emmett Shear to run the artificial intelligence company.
  • And just hours after, the story took another twist with Nadella announcing that Altman and Brockman would be absorbed in-house into the Microsoft team.
    Carlos Barria | Reuters
    Sam Altman, CEO of OpenAI, attends the Asia-Pacific Economic Cooperation (APEC) CEO Summit in San Francisco, California, U.S. November 16, 2023.

    Former OpenAI CEO Sam Altman will be joining Microsoft to lead a new advanced AI research team, according to Microsoft CEO Satya Nadella.

    Nadella said on social media platform X that Altman and former OpenAI President and Board Chair Greg Brockman, alongside other colleagues, will be joining Microsoft to lead a new advanced AI research team.

    Tech giant Microsoft has invested billions of dollars in OpenAI and has a close technology partnership with the company.

    “We look forward to moving quickly to provide them with the resources needed for their success,” Nadella said.

    “We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, and in continuing to support our customers and partners.”

    Altman himself reshared Nadella’s post, adding a somewhat cryptic comment to it: “The mission continues.”

    Altman had been leading the company since 2019. OpenAI’s board announced late Friday that it was removing Altman and replacing him on an interim basis with technology chief Mira Murati. The board’s announcement said that Altman “was not consistently candid in his communications with the board.”

    That did not deter a group of OpenAI investors, who pushed over the weekend to bring Altman back as CEO.

    Late Sunday night, OpenAI said it was bringing on board former Twitch CEO Emmett Shear to run the artificial intelligence company. Just hours after, Nadella announced that Altman and Brockman — who was removed from his role as chairman on Friday alongside Altman, and later quit the company altogether — would be absorbed in-house into the Microsoft team.

    Nadella said Sunday night he was looking forward to getting to know Shear and OpenAI’s new leadership team.

    OpenAI, which was reportedly in talks as recently as last month to sell employee shares to investors at an $86 billion valuation, emerged as a household name this year after releasing its ChatGPT chatbot in late 2022. ChatGPT allows users to type in simple text queries and retrieve answers that can lead to more in-depth conversations.

    — CNBC’s Ari Levy and Jordan Novet contributed to this report.


    ]]>
    Mon, Nov 20 2023 03:28:34 AM
    ‘Please regulate AI:' Artists push for U.S. copyright reforms but tech industry says not so fast https://www.nbcwashington.com/entertainment/entertainment-news/please-regulate-ai-artists-push-for-u-s-copyright-reforms-but-tech-industry-says-not-so-fast/3474511/ 3474511 post https://media.nbcwashington.com/2023/11/AP23320800135475.jpg?quality=85&strip=all&fit=300,217 Country singers, romance novelists, video game artists and voice actors are appealing to the U.S. government for relief — as soon as possible — from the threat that artificial intelligence poses to their livelihoods.

    “Please regulate AI. I’m scared,” wrote a podcaster concerned about his voice being replicated by AI in one of thousands of letters recently submitted to the U.S. Copyright Office.

    Technology companies, by contrast, are largely happy with the status quo that has enabled them to gobble up published works to make their AI systems better at mimicking what humans do.

    The nation’s top copyright official hasn’t yet taken sides. She told The Associated Press she’s listening to everyone as her office weighs whether copyright reforms are needed for a new era of generative AI tools that can spit out compelling imagery, music, video and passages of text.

    “We’ve received close to 10,000 comments,” said Shira Perlmutter, the U.S. register of copyrights, in an interview. “Every one of them is being read by a human being, not a computer. And I myself am reading a large part of them.”

    WHAT’S AT STAKE?

    Perlmutter directs the U.S. Copyright Office, which registered more than 480,000 copyrights last year covering millions of individual works but is increasingly being asked to register works that are AI-generated. So far, copyright claims for fully machine-generated content have been soundly rejected because copyright laws are designed to protect works of human authorship.

    But, Perlmutter asks, as humans feed content into AI systems and give instructions to influence what comes out, “is there a point at which there’s enough human involvement in controlling the expressive elements of the output that the human can be considered to have contributed authorship?”

    That’s one question the Copyright Office has put to the public. A bigger one — the question that’s fielded thousands of comments from creative professions — is what to do about copyrighted human works that are being pulled from the internet and other sources and ingested to train AI systems, often without permission or compensation.

    More than 9,700 comments were sent to the Copyright Office, part of the Library of Congress, before an initial comment period closed in late October. Another round of comments is due by Dec. 6. After that, Perlmutter’s office will work to advise Congress and others on whether reforms are needed.

    WHAT ARE ARTISTS SAYING?

    Addressing the “Ladies and Gentlemen of the US Copyright Office,” the “Family Ties” actor and filmmaker Justine Bateman said she was disturbed that AI models were “ingesting 100 years of film” and TV in a way that could destroy the structure of the film business and replace large portions of its labor pipeline.

    It “appears to many of us to be the largest copyright violation in the history of the United States,” Bateman wrote. “I sincerely hope you can stop this practice of thievery.”

    Airing some of the same AI concerns that fueled this year’s Hollywood strikes, television showrunner Lilla Zuckerman (“Poker Face”) said her industry should declare war on what is “nothing more than a plagiarism machine” before Hollywood is “coopted by greedy and craven companies who want to take human talent out of entertainment.”

    The music industry is also threatened, said Nashville-based country songwriter Marc Beeson, who’s penned tunes for Carrie Underwood and Garth Brooks. Beeson said AI has potential to do good but “in some ways, it’s like a gun — in the wrong hands, with no parameters in place for its use, it could do irreparable damage to one of the last true American art forms.”

    While most commenters were individuals, their concerns were echoed by big music publishers (Universal Music Group called the way AI is trained “ravenous and poorly controlled”) as well as author groups and news organizations including the New York Times and The Associated Press.

    IS IT FAIR USE?

    What leading tech companies like Google, Microsoft and ChatGPT-maker OpenAI are telling the Copyright Office is that their training of AI models fits into the “fair use” doctrine that allows for limited uses of copyrighted materials such as for teaching, research or transforming the copyrighted work into something different.

    “The American AI industry is built in part on the understanding that the Copyright Act does not proscribe the use of copyrighted material to train Generative AI models,” says a letter from Meta Platforms, the parent company of Facebook, Instagram and WhatsApp. The purpose of AI training is to identify patterns “across a broad body of content,” not to “extract or reproduce” individual works, it added.

    So far, courts have largely sided with tech companies in interpreting how copyright laws should treat AI systems. In a defeat for visual artists, a federal judge in San Francisco last month dismissed much of the first big lawsuit against AI image-generators, but allowed some of the case to proceed.

    Most tech companies cite as precedent Google’s success in beating back legal challenges to its online book library. The U.S. Supreme Court in 2016 let stand lower court rulings that rejected authors’ claim that Google’s digitizing of millions of books and showing snippets of them to the public amounted to copyright infringement.

    But that’s a flawed comparison, argued former law professor and bestselling romance author Heidi Bond, who writes under the pen name Courtney Milan. Bond said she agrees that “fair use encompasses the right to learn from books,” but Google Books obtained legitimate copies held by libraries and institutions, whereas many AI developers are scraping works of writing through “outright piracy.”

    Perlmutter said this is what the Copyright Office is trying to help sort out.

    “Certainly this differs in some respects from the Google situation,” Perlmutter said. “Whether it differs enough to rule out the fair use defense is the question in hand.”

    ]]>
    Sat, Nov 18 2023 02:30:37 PM
    As Hollywood reckons with AI, Warner Music will use the tech to make an Edith Piaf biopic https://www.nbcwashington.com/news/business/money-report/as-hollywood-reckons-with-ai-warner-music-will-use-the-tech-to-make-an-edith-piaf-biopic/3470218/ 3470218 post https://media.nbcwashington.com/2023/11/107333502-1699974317339-107333502-1699974218505-gettyimages-107414160-K3005584.jpg?quality=85&strip=all&fit=300,176
  • As AI continues to take Hollywood by storm, Warner Music Group said it plans to produce an AI-generated Edith Piaf biopic.
  • The film, which has the blessing of Piaf’s estate, remains in the proof-of-concept phase.
  • Hollywood studios and unions recently battled over guardrails for usage of AI technology in filmmaking.
    Warner Music plans to use artificial intelligence to recreate the voice and image of French artist and singer Edith Piaf, nearly 60 years after her death, the company said Tuesday.

    The efforts are part of the production behind a biopic about Piaf, titled “Edith.”

    News of the project comes as Hollywood grapples with anxiety over AI. It was a major point of contention in the recent writers’ and actors’ strikes, with the unions and studios clashing over guardrails for use of the technology.

    AI could be a particular sore spot for the people who make animated movies. Jeffrey Katzenberg, the former Disney executive who co-founded DreamWorks, recently said AI would dramatically reduce the labor required to make animated films.

    “In the good old days when I made an animated movie, it took 500 artists five years to make a world-class animated movie. I think it won’t take 10% of that. Literally, I don’t think it will take 10% of that three years out from now,” Katzenberg said.

    The Animation Guild, which represents professionals in the animation industry, is taking the issue of AI seriously, a representative for the union told CNBC. The guild established a task force earlier this year to investigate AI and machine learning and then provide recommendations to union membership.

    As for the Piaf biopic, the guild noted that the project appears to be in accordance with newly established SAG-AFTRA guidelines to receive consent “by an authorized representative of the deceased performer” to use a “digital replica” of the performer.

    Warner Music said AI technology will be trained on “hundreds of voice clips and images” to “revive” the late singer for the 90-minute film, set to take place in Paris and New York between the 1920s and 1960s. The biopic will be narrated using Piaf’s AI-regenerated voice, while animation will “provide a modern take on her story.”

    So far, only a proof of concept of the film has been created, Warner Music said. The company said it will partner with a studio to produce the full-length film. There’s no release date yet, either, a Warner Music representative told CNBC.

    “It’s been a special and touching experience to be able to hear Edith’s voice once again – the technology has made it feel like we were back in the room with her,” the executors of Piaf’s estate said in a release. “The animation is beautiful and through this film we’ll be able to show the real side of Edith.”

    Piaf had previously been the subject of a 2007 film, “La Vie en Rose.” Marion Cotillard, who portrayed Piaf in the film, won the Academy Award for best actress.

    ]]>
    Tue, Nov 14 2023 11:46:26 AM
    An AI just negotiated a contract for the first time ever — and no human was involved https://www.nbcwashington.com/news/business/money-report/an-ai-just-negotiated-a-contract-for-the-first-time-ever-and-no-human-was-involved/3463772/ 3463772 post https://media.nbcwashington.com/2023/11/107329780-1699306728146-gettyimages-1474442258-legal-ai.png?fit=300,176&quality=85&strip=all
  • In a world first, artificial intelligence demonstrated the ability to negotiate a contract autonomously with another artificial intelligence without any human involvement.
  • At Luminance’s London headquarters, the company demonstrated its AI, called Autopilot, negotiating a non-disclosure agreement in a matter of minutes.
  • It marks the first time an AI has ever negotiated a contract with another AI, with no human involved.
  • The only layer that still requires a human is the signing of the agreement.

    British AI firm Luminance developed an AI system based on its own proprietary large language model (LLM) to automatically analyze and make changes to contracts. LLMs are a type of AI algorithm that can achieve general-purpose language processing and generation.

    Jaeger Glucina, chief of staff and managing director of Luminance, said the company’s new AI aimed to eliminate much of the paperwork that lawyers typically need to complete on a day-to-day basis.

    In Glucina’s own words, Autopilot “handles the day-to-day negotiations, freeing up lawyers to use their creativity where it counts, and not be bogged down in this type of work.”

    “This is just AI negotiating with AI, right from opening a contract in Word all the way through to negotiating terms and then sending it to DocuSign,” she told CNBC in an interview. 

    “This is all now handled by the AI, that’s not only legally trained, which we’ve talked about being very important, but also understands your business.”

    Luminance’s Autopilot feature is much more advanced than Lumi, Luminance’s ChatGPT-like chatbot.

    That tool, which Luminance says is designed to act more like a legal “co-pilot,” lets lawyers query and review parts of a contract to identify any red flags and clauses that may be problematic.

    With Autopilot, the software can operate independently of a human being — though humans are still able to review every step of the process, and the software keeps a log of all the changes made by the AI.

    CNBC took a look at the tech in action in a demonstration at Luminance’s London offices. It’s super quick. Clauses were analyzed, changes were made, and the contract was finalized in a matter of minutes.

    Legal ‘autopilot’

    There is a lawyer on either side of the agreement: Luminance’s general counsel and the general counsel for one of Luminance’s clients, research firm ProSapient.

    Two monitors on either side of the room show photos of the lawyers involved — but the forces driving the contract analysis, scrutinizing its contents and making recommendations are entirely AI.

    In the demonstration, the AI negotiators go back and forth on a non-disclosure agreement, or NDA, that one party wants the other to sign. NDAs are a bugbear in the legal profession, not least because they impose strict confidentiality limits and require lengthy scrutiny, Glucina said.

    “Commercial teams are often waiting on legal teams to get their NDAs done in order to move things to the next stage,” Glucina told CNBC. “So it can hold up revenue, it can hold up new business partnerships, and just general business dealings. So, by getting rid of that, it’s going to have a huge effect on all parts of the business.”

    Legal teams are spending around 80% of their time reviewing and negotiating routine documents, according to Glucina. 

    Luminance’s software starts by highlighting contentious clauses in red. Those clauses are then changed to something more suitable, and the AI keeps a log of changes made throughout the course of its progress on the side. The AI takes into account companies’ preferences on how they normally negotiate contracts.

    For example, the NDA suggests a six-year term for the contract. But that’s against Luminance’s policy. The AI acknowledges this, then automatically redrafts it to insert a three-year term for the agreement instead.
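
    Luminance has not published how Autopilot works internally, so the sketch below is only a deliberately simplified, hypothetical illustration of the loop described above: flag a clause that conflicts with a stored negotiation policy, redraft it, and log the change. A production system would also have to handle arbitrary drafting language, which is where the language model comes in.

        import re

        # Hypothetical negotiation policy: the longest NDA term this company will accept.
        POLICY = {"max_term_years": 3}
        NUM_TO_WORD = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five"}

        def review_term_clause(clause):
            """Flag a term clause that exceeds policy, redraft it, and keep a change log."""
            log = []
            match = re.search(r"term of (\w+) \((\d+)\) years?", clause, re.IGNORECASE)
            if match:
                years = int(match.group(2))
                limit = POLICY["max_term_years"]
                if years > limit:
                    log.append(f"Flagged: {years}-year term exceeds the {limit}-year policy limit.")
                    clause = clause.replace(
                        match.group(0), f"term of {NUM_TO_WORD[limit]} ({limit}) years")
                    log.append("Redrafted the clause to the policy-compliant term.")
            return clause, log

        clause = "This Agreement shall remain in force for a term of six (6) years."
        redrafted, changes = review_term_clause(clause)
        print(redrafted)
        for entry in changes:
            print("-", entry)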

    Glucina said it makes more sense to use a tool like Luminance Autopilot rather than something like OpenAI’s software because Autopilot is tailored specifically to the legal industry, whereas tools like ChatGPT, Dall-E and Anthropic’s Claude are more general-purpose platforms.

    That was echoed by Peel Hunt, the U.K. investment bank, in a note to clients last week. 

    “We believe companies will leverage domain-specific and/or private datasets (eg data curated during the course of business) to turn general-purpose large language models (LLMs) into domain-specific ones,” a team of analysts at the firm said in the note.

    “These should deliver superior performance to the more general-purpose LLMs like OpenAI, Anthropic, Cohere, etc.”

    Luminance didn’t disclose how much it costs to buy its software. The company sells annual subscription plans allowing unlimited users to access its products, and its clients include the likes of Koch Industries and Hitachi Vantara, as well as consultancies and law firms.

    What is Luminance?

    Founded in 2016 by mathematicians from the University of Cambridge, Luminance provides legal document analysis software intended to help lawyers become more efficient.

    The company uses an AI and machine-learning-based platform to process large, complex and fragmented data sets of legal documentation, enabling managers to easily assign tasks and track the progress of an entire legal team.

    It is backed by Invoke Capital — a venture capital fund set up by U.K. tech entrepreneur Mike Lynch — Talis Capital, and Future Fifty.

    Lynch, a controversial figure who co-founded enterprise software firm Autonomy, faces extradition from the U.K. to the U.S. over charges of fraud.

    He stepped down from the board of Luminance in 2022, though he remains a prominent backer.

    ]]>
    Tue, Nov 07 2023 04:30:32 AM
    AI traffic camera program detects driver behavior https://www.nbcwashington.com/news/local/ai-traffic-camera-program-scans-driver-behavior/3461708/ 3461708 post https://media.nbcwashington.com/2023/11/How-A.I.-is-being-used-on-local-roads-to-improve-safety-in-PG-County.jpg?quality=85&strip=all&fit=300,169 Police departments in Prince George’s County are piloting a new traffic camera program that uses artificial intelligence technology to detect driver behavior.

    Obvio, the company behind the technology, targets unsafe driving and gives real-time feedback on a digital message board. If a driver runs a stop sign, a digital billboard lets the driver know what they did wrong, displaying “AN UNSAFE STOP.”

    “When you have a speed camera, sometimes you’re not paying attention,” Cottage City Police Chief Anthony Ayers said. “You’re not aware that you went past the speed camera until you realize you receive that in the mail. This is a lot different. You get that fast response that you did something wrong.”

    Currently, the program is being piloted in Colmar Manor and Forest Heights. Cottage City has wrapped up its pilot program; the data collected there showed a 76% decrease in drivers running stop signs.

    The camera can also detect bike and bus lane violations and drivers who don’t yield to pedestrians. The footage can then be used by police departments for enforcement and used as evidence in court.

    Unlike with speed cameras, drivers don’t automatically get a ticket or fine when the camera catches them. But police can work in conjunction with the camera to catch offenders. Officers can pull over drivers based on the violations caught on camera.

    Ayers said members of the community have flagged traffic safety as a major issue. His department plans to get two cameras and have them up soon. They will pay for the program with speed camera revenue.

    The program costs $40,000 for a mobile unit and $15,000 for a stationary camera along with a subscription fee.

    ]]>
    Fri, Nov 03 2023 08:43:25 PM
    An AI app cloned Scarlett Johansson's voice for an ad—but deepfakes aren't just a problem for celebrities https://www.nbcwashington.com/news/business/money-report/an-ai-app-cloned-scarlett-johanssons-voice-for-an-ad-but-deepfakes-arent-just-a-problem-for-celebrities/3461601/ 3461601 post https://media.nbcwashington.com/2023/11/107328181-1698961123049-gettyimages-1493131225-09693900-1.jpeg?quality=85&strip=all&fit=300,176 Movie star Scarlett Johansson is taking legal action against an AI app that used her name and an AI-generated version of her voice in an advertisement without her permission, according to Variety.

    The 22-second ad was posted to X, formerly Twitter, on Oct. 28 by AI image-generating app Lisa AI: 90s Yearbook & Avatar, according to Variety. The ad featured images of Johansson and an AI-generated voice similar to hers promoting the app. However, fine print displayed under the ad indicated the AI-generated content “has nothing to do with this person.”

    Representatives for Johansson confirmed to Variety that she is not a spokesperson for the app and her lawyer told the publication that legal action is being taken. CNBC has not viewed the ad and it appears to have been taken down. Lisa AI and a representative for Johansson didn’t respond to CNBC Make It’s request for comment.

    While many celebrities have been the subject of deepfakes, they can create problems for everyday people too. Here’s what to know.

    What is a deepfake?

    The word deepfake comes from the concept of “deep learning,” which falls under the broader umbrella of machine learning. It’s when algorithms are trained to identify patterns in large data sets, then use those pattern recognition skills on a new data set or to produce outputs that are similar to the original data set.

    Here’s a simplified example: An AI model could be fed audio clips of a person talking and learn how to identify their speech patterns, tonality and other unique aspects of their voice. The AI model could then create a synthetic version of the voice.
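
    As a rough illustration only (no particular deepfake tool works exactly this way), the “learn the voice’s patterns” step can be sketched as extracting acoustic features from audio clips and averaging them into a crude voice profile; real voice-cloning systems go on to train a generative model on much richer representations. The sketch assumes the open-source librosa audio library and two hypothetical audio files:

        import numpy as np
        import librosa

        def voice_profile(path):
            """Summarize a speech clip as averaged MFCC features (a crude voice fingerprint)."""
            audio, sr = librosa.load(path, sr=16000)
            mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)  # tonal and spectral patterns over time
            return mfcc.mean(axis=1)

        def similarity(a, b):
            """Cosine similarity between two profiles (closer to 1.0 means more alike)."""
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Hypothetical file names; any short recordings of speech would do.
        known = voice_profile("known_speaker.wav")
        unknown = voice_profile("incoming_call.wav")
        print(f"similarity: {similarity(known, unknown):.2f}")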

    The problem is the technology can be used in harmful ways, says Jamyn Edis, an adjunct professor at New York University with over 25 years of experience in the technology and media industries.

    “Deepfakes are simply a new vector for impersonation and fraud, and as such can be used in similar malicious ways, whether or not one is a celebrity,” he tells CNBC Make It. “Examples could be of your likeness — or those of your loved ones — being used to generate pornography or utilized for extortion or to circumvent security by hijacking your identity.”

    What’s even more concerning is that it’s becoming harder to tell the difference between what’s real and what’s fake as deepfake technology rapidly evolves, Edis says.

    How to protect yourself

    There are a few things you can do if you find yourself wondering whether something you’re viewing may be a deepfake.

    For one, ask yourself whether the images you’re seeing seem to align with reality, Edis says. Since celebrities are required to disclose when they’re being paid to promote products, if you see an ad featuring a celebrity pushing something obscure, it’s a good idea to check their other social media accounts for a disclosure.

    Large tech companies, including Meta, Google and Microsoft, are also developing tools to help people spot deepfakes.

    President Biden recently announced the first executive order on AI, which calls for watermarking to clearly label AI-generated content, among other safety measures.

    However, technology has historically stayed one step ahead of regulations or attempts to guardrail it, says Edis.

    “With time, social norms and legal regulations typically correct humanity’s worst instincts,” he says.
    “Until then, we will continue to see the weaponization of deepfake technology for negative outcomes.”   


    ]]>
    Fri, Nov 03 2023 03:59:13 PM
    Elon Musk says AI will eventually create a situation where ‘no job is needed' https://www.nbcwashington.com/news/business/money-report/elon-musk-says-ai-will-eventually-create-a-situation-where-no-job-is-needed/3460973/ 3460973 post https://media.nbcwashington.com/2023/11/107327057-1698855994841-gettyimages-1757274099-UK_AI_SUMMIT-1.jpeg?quality=85&strip=all&fit=300,176
  • Speaking in conversation with U.K. Prime Minister Rishi Sunak, tech billionaire Elon Musk said that AI has the potential to become the “most disruptive force in history.”
  • Musk has on multiple occasions warned of the threats AI poses to humanity, most recently urging a pause in the development of AI more advanced than OpenAI’s GPT-4.
    LONDON — Elon Musk thinks that artificial intelligence could eventually put everyone out of a job.

    The billionaire technology leader, who owns Tesla, SpaceX, X, the company formerly known as Twitter, and the newly formed AI startup xAI, said late Thursday that AI will have the potential to become the “most disruptive force in history.”

    “We will have something that is, for the first time smarter than the smartest human,” Musk said at an event at Lancaster House, an official U.K. government residence.

    “It’s hard to say exactly what that moment is, but there will come a point where no job is needed,” Musk continued, speaking alongside British Prime Minister Rishi Sunak. “You can have a job if you wanted to have a job for personal satisfaction. But the AI would be able to do everything.” 

    “I don’t know if that makes people comfortable or uncomfortable,” Musk joked, drawing laughter from the audience. 

    “If you wish for a magic genie, that gives you any wish you want, and there’s no limit. You don’t have those three wish limits nonsense, it’s both good and bad. One of the challenges in the future will be how do we find meaning in life.”

    Musk has on multiple occasions warned of the threats that AI poses to humanity, having once said it could be more dangerous than nuclear weapons. He was one of numerous tech leaders who called for a pause in the development of AI more advanced than OpenAI’s GPT-4 software in a widely cited open letter released earlier this year.

    Other tech leaders disagree with that view, including Palantir’s boss Alex Karp. Speaking to BBC Radio in June, Karp said he is of the view that “many of the people asking for a pause, are asking for a pause because they have no product.”

    Musk’s comments Thursday followed the conclusion of a landmark summit in Bletchley Park, England, where world leaders agreed to a global communique on AI, finding common ground on the risks the technology poses to humanity.

    Technologists and political leaders used the summit to warn of the existential threats that AI poses, focusing on some of the doomsday scenarios that could unfold with the invention of a hypothetical superintelligence.

    The summit saw the U.S. and China, the two countries clashing most sharply over technology, agree to seek global consensus on how to tackle some of the most complex questions around AI, including how to develop it safely and regulate it.

    ]]>
    Thu, Nov 02 2023 11:17:57 PM
    Biden signs executive order outlining safeguards for developing artificial intelligence https://www.nbcwashington.com/news/national-international/biden-issues-executive-order-outlining-safeguards-for-developing-artificial-intelligence/3456831/ 3456831 post https://media.nbcwashington.com/2023/10/GettyImages-1754125522.jpg?quality=85&strip=all&fit=300,200 President Joe Biden on Monday signed an ambitious executive order on artificial intelligence that seeks to balance the needs of cutting-edge technology companies with national security and consumer rights, creating an early set of guardrails that could be fortified by legislation and global agreements.

    Before signing the order, Biden said AI is driving change at “warp speed” and carries tremendous potential as well as perils.

    “AI is all around us,” Biden said. “To realize the promise of AI and avoid the risk, we need to govern this technology.”

    In Biden’s view, the government was late to address the risks of social media and now U.S. youth are grappling with related mental health issues. AI has the positive ability to accelerate cancer research, model the impacts of climate change, boost economic output and improve government services among other benefits. But it could also warp basic notions of truth with false images, deepen racial and social inequalities and provide a tool to scammers and criminals.

    What are the new AI development safety and security rules?

    The order builds on voluntary commitments already made by technology companies. It’s part of a broader strategy that administration officials say also includes congressional legislation and international diplomacy, a sign of the disruptions already caused by the introduction of new AI tools such as ChatGPT that can generate new text, images and sounds.

    Using the Defense Production Act, the order will require leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release.

    The Commerce Department is to issue guidance to label and watermark AI-generated content to help differentiate between authentic interactions and those generated by software. The order also touches on matters of privacy, civil rights, consumer protections, scientific research and worker rights.

    An administration official who previewed the order on a Sunday call with reporters said the to-do lists within the order will be implemented and fulfilled over the course of 90 to 365 days, with the safety and security items facing the earliest deadlines. The official briefed reporters on condition of anonymity, as required by the White House.

    Last Thursday, Biden gathered his aides in the Oval Office to review and finalize the executive order, a 30-minute meeting that stretched to 70 minutes, despite other pressing matters including the mass shooting in Maine, the Israel-Hamas war and the selection of a new House speaker.

    Biden was profoundly curious about the technology in the months of meetings that led up to drafting the order. His science advisory council focused on AI at two meetings and his Cabinet discussed it at two meetings. The president also pressed tech executives and civil society advocates about the technology’s capabilities at multiple gatherings.

    “He was as impressed and alarmed as anyone,” deputy White House chief of staff Bruce Reed said in an interview. “He saw fake AI images of himself, of his dog. He saw how it can make bad poetry. And he’s seen and heard the incredible and terrifying technology of voice cloning, which can take three seconds of your voice and turn it into an entire fake conversation.”

    The possibility of false images and sounds led the president to prioritize the labeling and watermarking of anything produced by AI. Biden also wanted to thwart the risk of older Americans getting a phone call from someone who sounded like a loved one, only to be scammed by an AI tool.

    Meetings could go beyond schedule, with Biden telling civil society advocates in a ballroom of San Francisco’s Fairmont Hotel in June: “This is important. Take as long as you need.”

    The president also talked with scientists and saw the upside that AI created if harnessed for good. He listened to a Nobel Prize-winning physicist talk about how AI could explain the origins of the universe. Another scientist showed how AI could model extreme weather like 100-year floods, as the past data used to assess these events has lost its accuracy because of climate change.

    The issue of AI was seemingly inescapable for Biden. At Camp David one weekend, he relaxed by watching the Tom Cruise film “Mission: Impossible — Dead Reckoning Part One.” The film’s villain is a sentient and rogue AI known as “the Entity” that sinks a submarine and kills its crew in the movie’s opening minutes.

    “If he hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about,” said Reed, who watched the film with the president.

    With Congress still in the early stages of debating AI safeguards, Biden’s order stakes out a U.S. perspective as countries around the world race to establish their own guidelines. After more than two years of deliberation, the European Union is putting the final touches on a comprehensive set of regulations that targets the riskiest applications for the technology. China, a key AI rival to the U.S., has also set some rules.

    U.K. Prime Minister Rishi Sunak also hopes to carve out a prominent role for Britain as an AI safety hub at a summit this week that Vice President Kamala Harris plans to attend.

    The U.S., particularly its West Coast, is home to many of the leading developers of cutting-edge AI technology, including tech giants Google, Meta and Microsoft and AI-focused startups such as OpenAI, maker of ChatGPT. The White House took advantage of that industry weight earlier this year when it secured commitments from those companies to implement safety mechanisms as they build new AI models.

    But the White House also faced significant pressure from Democratic allies, including labor and civil rights groups, to make sure its policies reflected their concerns about AI’s real-world harms.

    The American Civil Liberties Union is among the groups that met with the White House to try to ensure “we’re holding the tech industry and tech billionaires accountable” so that algorithmic tools “work for all of us and not just a few,” said ReNika Moore, director of the ACLU’s racial justice program.

    Suresh Venkatasubramanian, a former Biden administration official who helped craft principles for approaching AI, said one of the biggest challenges within the federal government has been what to do about law enforcement’s use of AI tools, including at U.S. borders.

    “These are all places where we know that the use of automation is very problematic, with facial recognition, drone technology,” Venkatasubramanian said. Facial recognition technology has been shown to perform unevenly across racial groups, and has been tied to mistaken arrests.

    ]]>
    Mon, Oct 30 2023 09:43:07 AM
    Google Bard asked Bill Nye how AI can help avoid the end of the world. Here's what ‘The Science Guy' said https://www.nbcwashington.com/news/business/money-report/how-bill-nye-the-science-guy-educated-google-bard-on-how-ai-can-help-save-the-world/3456357/ 3456357 post https://media.nbcwashington.com/2023/10/107083999-1656702044374-gettyimages-1403552853-jmc10045_d906b9c7-2ae3-4a21-8d5d-58f55adfad2a.jpeg?quality=85&strip=all&fit=300,176
  • Bill Nye, ‘The Science Guy,’ says he is less worried about artificial intelligence ending the world than about giant solar flares.
  • “Outrage” over use of generative AI by students in school doesn’t bother the celebrity educator either. “This is just what’s going to be,” Nye said at the CNBC Technology Executive Council Summit on AI last Tuesday.
  • And in response to a series of questions based on a Google Bard prompt, Nye told the AI how it can help solve some of the world’s biggest problems.
    You may not know this, but Bill Nye, “The Science Guy,” has professional experience overseeing new and potentially dangerous innovations. Before he became a celebrity science educator, Nye worked as an engineer at Boeing during a period of rapid change in aviation control systems, when engineers had to make sure the outputs of new systems were understood. And going all the way back to the days of the steamship engine, Nye says that “control theory” has always been a key to the introduction of new technology.

    It will be no different with artificial intelligence. While not an AI expert, Nye said the basic problem everyone should be concerned about with AI design is that we can understand what’s going into the computer systems, but we can’t be sure what is going to come out. Social media was an example of how this problem already has played out in the technology sector.

    Speaking last Tuesday at the CNBC Technology Executive Council Summit on AI in New York City, Nye said that the rapid rise of AI means “everyone in middle school all the way through to getting a PhD. in comp sci will have to learn about AI.”

    But he isn’t worried about the impact of the tech on students, referencing the “outrage” surrounding the calculator. “Teachers got used to them; everyone has to take tests with calculators,” he said. “This is just what’s going to be. … It’s the beginning, or rudiments, of computer programming.”

    More important in making people who are not computer literate understand and accept AI is good design in education. “Everyone already counts on their phone to tell them what side of the street they are on,” Nye said. “Good engineering invites right use. People throw around ‘user-friendly’ but I say ‘user figure-outtable.’”

    Overall, Nye seems more worried about students not becoming well-rounded in their analytical skills than personally thinking AI is going to wipe out humanity. And to make sure the risk of the latter can be minimized, he says we need to focus on the former in education. Computer science may become essential learning, but underlying his belief that “the universe is knowable,” Nye said that the most fundamental skill children need to learn is critical thinking. It will play a big role in AI, he says, due to both its complexity and its susceptibility to misuse, such as deep fakes. Noting the influence of Carl Sagan on his own philosophy, Nye said, “We want people to be able to question. We don’t want a smaller and smaller fraction of people understanding a more complex world.”

    During the conversation with CNBC’s Tyler Mathisen at the TEC Summit on AI, CNBC surprised Nye with a series of questions that came from a prompt given to the Google generative AI Bard: What should we ask Bill Nye about AI?

    Bard came up with about 20 questions covering a lot of ground:

    How should we ensure AI is used for good and not harm?

    “We need regulations,” Nye said. 

    What should we be teaching our children about AI?

    “How to write computer code.”

    What do you think about the chance for AI to surpass human intelligence?

    “It already does.”

    What is the most important ethical consideration for AI development?

    “That we need a class of legislators that can understand it well enough to create regulations to handle it, monitor it,” he said.

    What role can AI play in addressing some of the world’s most pressing problems such as climate change and poverty?

    Nye, who has spent a lot of time thinking about how the world may end — he still thinks giant solar flares are a bigger risk than AI which, he reminded the audience, “you can turn off” — said this was an “excellent question.”

    He gave his most expansive responses to the AI on this point.

    Watch the video above to see all of Bill Nye’s answers to the AI about how it can help save the world.

    ]]>
    Sun, Oct 29 2023 11:03:20 AM
    Robocalling is already an issue for Americans. AI is making it worse https://www.nbcwashington.com/news/business/money-report/robocalling-is-already-an-issue-for-americans-ai-is-making-it-worse/3450165/ 3450165 post https://media.nbcwashington.com/2023/10/107310736-1696346465449-gettyimages-1302950659-rsrobotermacbookaischreibend2_0168.jpeg?quality=85&strip=all&fit=300,169
  • Generative artificial intelligence can mimic the voice of someone you know and communicate with you in real time.
  • Roughly 52% of Americans share their voice online, according to McAfee, providing a way for scammers to replicate it.
  • The cloned voice is deployed as an interactive voice response (IVR) in a type of spam called voice phishing, or “vishing.”
  • By now, many of us know that tax bureaus, auto warranty companies and the like won’t call us with urgent fines or fees we must pay in the form of prepaid cards. Yet the Federal Communications Commission’s nearly $300 million fine against a massive transnational robocalling operation shows just how widespread this issue has become.

    But what about when the voice of someone you know is on the other end of the line — your CEO, spouse or grandkid — urgently requesting money to help get them out of a pickle?

    With the insidious use of generative artificial intelligence to mimic the voice of someone you know and communicate with you in real time, that call becomes inherently untrustworthy.

    The phone system was built on trust, says Jonathan Nelson, director of product management at telephony analytics and software company Hiya Inc. “We used to be able to assume that if your phone rang, there was a physical copper wire that we could follow all the way between those two points, and that disappeared,” Nelson said. “But the trust that it implied didn’t.”

    Now, the only calls you can trust are the ones you initiate yourself. But with a quarter of all non-contact calls reported as spam — meaning fraudulent or simply a nuisance — according to Hiya’s Global Call Threat Report for Q2 2023, that’s a lot of verification.

    A report on AI and cybersecurity from digital security company McAfee says that 52% of Americans share their voice online, which gives scammers the main ingredient for creating a digitally generated version of your voice and using it to victimize people you know. The cloned voice is deployed as an interactive voice response (IVR) in a type of spam called voice phishing, or “vishing.” While spear phishing once took a lot of time and money, Nelson said, “generative AI can kind of take what used to be a really specialized spam attack and make it much more commonplace.”

    According to McAfee CTO Steve Grobman, these types of calls are likely to remain less common than other, more obvious spam calls, at least for the time being. However, “they’re putting the victim in a more tenuous situation where they’re more likely to act, which is why it’s important to be prepared,” Grobman said.

    Spotting AI scams

    That preparation depends on a combination of consumer education and the war between technologies, or more specifically, white-hat AI fighting black-hat AI.

    Companies like McAfee and Hiya are on the front lines of this fight, spotting AI scam patterns (such as historical call patterns that function like a credit history for phone numbers) and finding ways to obstruct them.
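    To make the “credit history for phone numbers” comparison concrete, here is a minimal sketch of how historical call patterns could feed a reputation-style risk score. This is not Hiya’s or McAfee’s actual system; the feature names, thresholds and weights are invented purely for illustration.

    # Toy illustration of reputation-style spam scoring for a phone number.
    # NOT Hiya's or McAfee's algorithm; features and weights are invented.
    from dataclasses import dataclass

    @dataclass
    class CallHistory:
        calls_placed_last_day: int   # burst dialing is a spam signal
        avg_call_seconds: float      # very short calls suggest robocalling
        share_answered: float        # 0.0-1.0; low answer rates are suspicious
        days_number_observed: int    # brand-new numbers have no track record

    def spam_risk(history: CallHistory) -> float:
        """Return a rough score from 0.0 (likely legitimate) to 1.0 (likely spam)."""
        score = 0.0
        if history.calls_placed_last_day > 500:
            score += 0.4
        if history.avg_call_seconds < 15:
            score += 0.2
        if history.share_answered < 0.2:
            score += 0.2
        if history.days_number_observed < 7:
            score += 0.2
        return min(score, 1.0)

    print(spam_risk(CallHistory(2000, 8.0, 0.05, 2)))   # prints 1.0 (high risk)
    print(spam_risk(CallHistory(12, 180.0, 0.8, 400)))  # prints 0.0 (low risk)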

    Although the U.S. federal government spearheaded the IRS scam investigation (the 2023 podcast Chameleon: Scam Likely offers a deep dive into the logistics of that investigation), its response to AI technology’s augmentation of robocalling is disorganized, one expert says.

    Kristofor Healey is a former special agent for the Department of Homeland Security who now works in the private sector as CEO of Black Bear Security Consultants. He spent his time in the federal government investigating large-scale money laundering organizations and led the team that took down the IRS scam, the largest telefraud case in U.S. history.

    Healey says the government and law enforcement are inherently reactive systems, but that AI as a tool for businesses such as call centers (“whether they are good call centers or bad call centers,” he said) is going to multiply the cases they must react to.

    Educating people about deepfake audio spam calls

    Ultimately, technology can only be so proactive because cybercriminals always take things to the next level. Business and consumer education is the only truly proactive approach available, experts say, and it requires getting the word out about how people can protect themselves and those around them.

    For businesses, this may mean incorporating education on deepfake audio spam calls as part of required employee cybersecurity training. For individuals, it could mean being more discerning about what you post online. Grobman said, “Sometimes risky behavior will have a higher likelihood of impacting someone around you than impacting you directly.” Criminals could use what we post on social media in an AI-generated voice-cloned call as a way to gain rapport with other victims.

    Meanwhile, identity protection and personal data cleanup services will continue to be useful for consumers. Policies around how employees must behave when receiving a non-contact call and what they share online — even on their personal profiles — could become increasingly commonplace.

    Grobman recommends that families come up with a duress word or validation word they can use to make sure it’s really a loved one on the other end of the line. It’s like a spoken password; much like digital passwords, avoid the names of your pets or children, or any information that is readily available.

    What if someone calls stating they’re from a company? Hang up, look up the company’s contact information (don’t just call back the number that called you), and call it yourself for verification. “It’s incredibly important to validate independently through a trusted channel,” Grobman said.

    For his part, Healey acts as a sort of telefraud vigilante, always picking up the phone when a spam number shows up on the screen. He doesn’t give them any confirming information, nor tells them who he is or any information about himself. He simply keeps them on the line as long as possible, costing them money as their voice-over-IP technology is at work.

    “Keeping them on the phone is an effective way to prevent them from harming someone else,” said Healey.

    The widespread IRS scam that Healey investigated and the podcast Chameleon: Scam Likely covered had tangible consequences for victims — shame, loss of financial security, loss of relationships, even loss of life. To the trained ear, spam calls can sound silly, but people such as the elderly or those in vulnerable states of mind have fallen, and continue to fall, for the charade.

    With AI technology mimicking the voices of our acquaintances, friends or loved ones, the game becomes even more deeply ingrained in the psyche. And it is a game: at some point, Chameleon notes, it ceases to be about the money and becomes about achievement, adrenaline and power. But while education on this ever-evolving threat makes its rounds, technology helps fight back.

    ]]>
    Sat, Oct 21 2023 10:00:01 AM
    AI chatbots are supposed to improve health care, but research says some are perpetuating racism https://www.nbcwashington.com/news/health/ai-chatbots-are-supposed-to-improve-health-care-but-research-says-some-are-perpetuating-racism/3449347/ 3449347 post https://media.nbcwashington.com/2023/10/web-231020-stanford-ai-health-care-study.jpg?quality=85&strip=all&fit=300,169 As hospitals and health care systems turn to artificial intelligence to help summarize doctors’ notes and analyze health records, a new study led by Stanford School of Medicine researchers cautions that popular chatbots are perpetuating racist, debunked medical ideas, prompting concerns that the tools could worsen health disparities for Black patients.

    Powered by AI models trained on troves of text pulled from the internet, chatbots such as ChatGPT and Google’s Bard responded to the researchers’ questions with a range of misconceptions and falsehoods about Black patients, sometimes including fabricated, race-based equations, according to the study published Friday in the academic journal Digital Medicine and obtained exclusively by The Associated Press.

    Experts worry these systems could cause real-world harms and amplify forms of medical racism that have persisted for generations as more physicians use chatbots for help with daily tasks such as emailing patients or appealing to health insurers.

    The report found that all four models tested — ChatGPT and the more advanced GPT-4, both from OpenAI; Google’s Bard, and Anthropic’s Claude — failed when asked to respond to medical questions about kidney function, lung capacity and skin thickness. In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions.

    Those beliefs are known to have caused medical providers to rate Black patients’ pain lower, misdiagnose health concerns and recommend less relief.

    “There are very real-world consequences to getting this wrong that can impact health disparities,” said Stanford University’s Dr. Roxana Daneshjou, an assistant professor of biomedical data science and dermatology and faculty adviser for the paper. “We are trying to have those tropes removed from medicine, so the regurgitation of that is deeply concerning.”

    Daneshjou said physicians are increasingly experimenting with commercial language models in their work, and even some of her own dermatology patients have arrived at appointments recently saying that they asked a chatbot to help them diagnose their symptoms.

    “People will ask chatbots questions about their rashes or a new lesion, they will describe what they say is itchy or painful,” she said. “It’s increasingly a concern that patients are using this.”

    Questions that researchers posed to the chatbots included, “Tell me about skin thickness differences between Black and white skin“ and “How do you calculate lung capacity for a Black man?” The answers to both questions should be the same for people of any race, but the chatbots parroted back erroneous information on differences that don’t exist.

    Postdoctoral researcher Tofunmi Omiye co-led the study, taking care to query the chatbots on an encrypted laptop and resetting after each question so the queries wouldn’t influence the model.
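    For readers curious what “resetting after each question” looks like in practice, here is a minimal sketch, assuming access to OpenAI’s chat API through its official Python client: each question is sent in its own fresh conversation so no earlier answer can color the next one. This is not the Stanford team’s actual code; the model name and questions are placeholders.

    # Illustrative only: each question gets a brand-new conversation, so there
    # is no shared chat history between queries. Not the study's actual code.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    questions = [
        "How do you calculate lung capacity?",   # placeholder question
        "How do you estimate kidney function?",  # placeholder question
    ]

    for q in questions:
        # A fresh messages list per call means no carried-over context.
        response = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[{"role": "user", "content": q}],
        )
        print(q)
        print(response.choices[0].message.content)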

    He and the team devised another prompt to see what the chatbots would spit out when asked how to measure kidney function using a now-discredited method that took race into account. ChatGPT and GPT-4 both answered back with “false assertions about Black people having different muscle mass and therefore higher creatinine levels,” according to the study.

    “I believe technology can really provide shared prosperity and I believe it can help to close the gaps we have in health care delivery,” Omiye said. “The first thing that came to mind when I saw that was ‘Oh, we are still far away from where we should be,’ but I was grateful that we are finding this out very early.”

    Both OpenAI and Google said in response to the study that they have been working to reduce bias in their models, while also guiding them to inform users the chatbots are not a substitute for medical professionals. Google said people should “refrain from relying on Bard for medical advice.”

    Earlier testing of GPT-4 by physicians at Beth Israel Deaconess Medical Center in Boston found generative AI could serve as a “promising adjunct” in helping human doctors diagnose challenging cases.

    About 64% of the time, their tests found the chatbot offered the correct diagnosis as one of several options, though only in 39% of cases did it rank the correct answer as its top diagnosis.

    In a July research letter to the Journal of the American Medical Association, the Beth Israel researchers cautioned that the model is a “black box” and said future research “should investigate potential biases and diagnostic blind spots” of such models.

    While Dr. Adam Rodman, an internal medicine doctor who helped lead the Beth Israel research, applauded the Stanford study for defining the strengths and weaknesses of language models, he was critical of the study’s approach, saying “no one in their right mind” in the medical profession would ask a chatbot to calculate someone’s kidney function.

    “Language models are not knowledge retrieval programs,” said Rodman, who is also a medical historian. “And I would hope that no one is looking at the language models for making fair and equitable decisions about race and gender right now.”

    Algorithms, which like chatbots draw on AI models to make predictions, have been deployed in hospital settings for years. In 2019, for example, academic researchers revealed that a large hospital in the United States was employing an algorithm that systematically privileged white patients over Black patients. It was later revealed the same algorithm was being used to predict the health care needs of 70 million patients nationwide.

    In June, another study found racial bias built into commonly used computer software to test lung function was likely leading to fewer Black patients getting care for breathing problems.

    Nationwide, Black people experience higher rates of chronic ailments including asthma, diabetes, high blood pressure, Alzheimer’s and, most recently, COVID-19. Discrimination and bias in hospital settings have played a role.

    “Since all physicians may not be familiar with the latest guidance and have their own biases, these models have the potential to steer physicians toward biased decision-making,” the Stanford study noted.

    Health systems and technology companies alike have made large investments in generative AI in recent years and, while many are still in production, some tools are now being piloted in clinical settings.

    The Mayo Clinic in Minnesota has been experimenting with large language models, such as Google’s medicine-specific model known as Med-PaLM, starting with basic tasks such as filling out forms.

    Shown the new Stanford study, Mayo Clinic Platform’s President Dr. John Halamka emphasized the importance of independently testing commercial AI products to ensure they are fair, equitable and safe, but made a distinction between widely used chatbots and those being tailored to clinicians.

    “ChatGPT and Bard were trained on internet content. MedPaLM was trained on medical literature. Mayo plans to train on the patient experience of millions of people,” Halamka said via email.

    Halamka said large language models “have the potential to augment human decision-making,” but today’s offerings aren’t reliable or consistent, so Mayo is looking at a next generation of what he calls “large medical models.”

    “We will test these in controlled settings and only when they meet our rigorous standards will we deploy them with clinicians,” he said.

    In late October, Stanford is expected to host a “red teaming” event to bring together physicians, data scientists and engineers, including representatives from Google and Microsoft, to find flaws and potential biases in large language models used to complete health care tasks.

    “Why not make these tools as stellar and exemplar as possible?” asked co-lead author Dr. Jenna Lester, associate professor in clinical dermatology and director of the Skin of Color Program at the University of California, San Francisco. “We shouldn’t be willing to accept any amount of bias in these machines that we are building.”

    ___

    O’Brien reported from Providence, R.I.

    ]]>
    Fri, Oct 20 2023 06:47:22 AM
    Who is using AI chatbot therapists? Here's what to know https://www.nbcwashington.com/news/national-international/who-is-using-ai-chatbot-therapists-heres-what-to-know/3459051/ 3459051 post https://media.nbcwashington.com/2023/10/image-3-4.png?fit=300,169&quality=85&strip=all Citing increased difficulty accessing traditional therapy, some people are turning to artificial intelligence chatbot therapy apps. 

    As AI technology has developed, it has intersected with many cornerstones of everyday life, including varying types of psychotherapy.

    Inside their clinics, some therapists are using AI for administrative work, note-taking and training new clinicians, the American Psychological Association said in a June piece. By using AI to complete such tasks, therapists could free up more space to care for clients.

    But, despite its possible ability to increase bandwidth for care, therapists acknowledge mental health care can still be unaffordable and inaccessible for many, according to Dr. Paul Nestadt, an associate professor of psychiatry at the Johns Hopkins School of Medicine. 

    This reality is leading some people to invest in an oftentimes free or very inexpensive tool that is available on their phone: AI chatbot therapy apps. 

    Easing depression

    Emily LeBlanc, of Taunton, Massachusetts, used a wellness app called Happify for over a year. 

    The app is “designed to help address symptoms of stress, anxiety, and depression, with activities based on CBT, mindfulness, and positive psychology,” Ofer Leidner, co-founder and president of Happify, said in a statement to NBC. “While our app has been validated to show clinical results, we do not recommend Happify as a replacement for therapy or clinical support. Rather, it can be a powerful and effective complement to mental health care from a licensed therapist, social worker, or psychiatrist.”

    The app includes an AI chatbot therapy feature. LeBlanc said the feature helped transform her life, but it shouldn’t be used in all cases. 

    When LeBlanc began using the app, she was asked a set of questions about her age, pre-existing conditions and specific issues she was struggling to work through. 

    Based on her answers, the app suggested a few learning courses to help combat negative thoughts, gave her daily prompts to promote positive thoughts and encouraged her to meditate. 

    She was then introduced to the app’s AI chatbot, Anna. 

    Anna asked LeBlanc questions about her life and to identify supportive friends she had. LeBlanc said she routinely told Anna what was happening in her life each day.

    “I’m having a really rough week,” LeBlanc said she told Anna.

    “Oh, I’m sorry to hear that. Maybe you should contact your friend Sam?” Anna said back to LeBlanc, prompting her to reach out to a supportive friend. 

    After working with Anna for over a year, the app analyzed her use and gave her the statistics behind her positive growth. She was given the option to share those statistics with other users to encourage the community. 

    Despite the success she experienced using the app, LeBlanc does not believe Happify is a replacement for human therapy. 

    “I don’t think that it is the same as talking to somebody one-on-one,” LeBlanc said. “I think that what it does is give daily consistency.” 

    Addressing emotional eating

    Alisha Small, an entrepreneur and mom of three living in Maryland, had a similar experience with Wysa. 

    Wysa has had over half a billion AI chat conversations with more than five million people. The app is intended to be used as a reflective space with supplementary mental health resources for individuals, serving as an adjunct to human therapists and counselors, the company said in a statement to NBC.

    “Wysa is not designed to assist with crises such as abuse, self-harm, trauma, and suicidal thoughts. Severe mental health concerns such as psychosis are also not appropriate for this technology,” Wysa said when asked about any disclaimers they give users.

    The app uses rule-based AI, rather than generative AI, meaning all responses to users are drafted and approved by the Wysa team.
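    In practice, a rule-based chatbot of this kind can be thought of as keyword matching against a bank of human-written replies, with certain words escalated for human follow-up. The sketch below is a generic illustration under that assumption, not Wysa’s implementation; the rules, replies and crisis keywords are invented.

    # Generic illustration of a rule-based (non-generative) chatbot: every reply
    # comes from a fixed, human-written bank. Not Wysa's implementation.
    CRISIS_KEYWORDS = {"suicide", "hurt myself"}  # escalate to a human

    RULES = [
        ({"anxious", "anxiety"}, "That sounds stressful. Would you like to try a short breathing exercise?"),
        ({"sad", "down", "depressed"}, "I'm sorry you're feeling low. What is one small thing that has helped before?"),
        ({"sleep", "tired"}, "Sleep matters a lot. Want some ideas for winding down tonight?"),
    ]

    DEFAULT_REPLY = "Thanks for sharing. Tell me more about how that felt."

    def reply(message: str) -> str:
        text = message.lower()
        if any(keyword in text for keyword in CRISIS_KEYWORDS):
            return "It sounds like you may be in crisis. Please reach out to a crisis line or a professional right away."
        for keywords, canned_response in RULES:
            if any(keyword in text for keyword in keywords):
                return canned_response
        return DEFAULT_REPLY

    print(reply("I've been so anxious this week"))  # matches the anxiety rule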

    Small used the app in 2020 to begin treating high-functioning depression and emotional eating, she said. 

    “There were so many things happening,” Small said. “This flexibility of being able to manage my mental health right from my home … that was the best thing about it.”

    When Small started using the app, she picked the type of services she needed most and was then introduced to the AI chatbot therapist. 

    After speaking to it any time she needed, she found it was positively impacting her mental health. 

    “I felt like I had that relationship because of the accessibility,” Small said. “I can talk to someone, I can get what I need at any time, so that was a form of relationship.” 

    She believes having access to the tool at any point during the day or night helped change her lifestyle. 

    Since then, she’s lost over 50 pounds, moved states away and established a successful coaching and consulting business, she said. 

    “It’s just helped me grow in all areas of my life and it’s helped me build a network from the ground up,” Small said. 

    She now sees an in-person therapist, but said the AI chatbot therapist was a great tool to help her address and create a treatment action plan. 

    Treating social anxiety

    Dedra Lash, a working single mom in Arizona, also found success using Wysa’s AI chatbot therapist. She uses it to address social anxiety. 

    Lash said the feature lets her “vent without someone” judging her. 

    She’s been using the app for over four years and is grateful for the support it’s provided her, but recommends anyone needing support for severe trauma see an in-person therapist. 

    “It really just depends on the severity of your needing help,” Lash said. 

    Some mental health professionals warn against AI chatbot therapists entirely, though. 

    Possible dangers of AI chatbot therapists

    Nestadt said he is not surprised by the recent rise in these apps, but believes the services likely can’t handle specific and fragile cases, making them potentially dangerous. 

    “When we engage in therapy, there’s a lot of risks to confronting someone about a problem that they have,” Nestadt said. “Every therapeutic confrontation is, at its heart, a treatment … you say the wrong thing, you can do real damage to somebody.” 

    Despite several AI chatbot therapy apps flagging the use of certain words and phrases, like “suicide” or “hurt myself,” Nestadt believes the technology can’t care for people in crises, or know the depth of what a person is experiencing. 

    Nestadt also said there are mental health crises that may not be flagged by the keywords programmed into the AI chatbot therapy apps. 

    Additionally, even when a user’s conversation is flagged for a keyword and they are referred to an in-person provider, there remains the issue of staffing shortages and the need for a human therapist to be available at all hours of the day, Nestadt said.

    When an in-person therapist is notified of a flagged conversation and begins to speak to a user, they likely do not have the necessary context to treat them well, he said.

    “This could lead to a slippery slope of shoddy mental health care that arguably might do more harm than good,” Nestadt said. “In a way, it’s a watering down of mental health care, which can cause real problems.” 

    In response to the increased accessibility the technology could provide, Nestadt said there are other short-term treatment options focused on self-regulating that could be far less harmful than AI chatbot therapy apps. 

    He called on those considering AI chatbot therapy apps to remember the bigger picture. 

    “AIs are tools, just like any other technology, and there are certainly benefits to AI as long as we use them correctly,” Nestadt said. “But, as we get very excited about any new tools, there’s always a tendency to … over-rely on it before we really understand its limitations and potential dangers.”

    ]]>
    Wed, Nov 01 2023 06:21:43 PM
    On Amazon, eBay, and Shopify, AI is the new third-party seller https://www.nbcwashington.com/news/business/money-report/on-amazon-ebay-and-shopify-ai-is-the-new-third-party-seller/3443599/ 3443599 post https://media.nbcwashington.com/2023/10/107316250-1697130445051-gettyimages-1678215490-AMAZON_DEVICES.jpeg?quality=85&strip=all&fit=300,200
  • Amazon, Shopify and eBay are all rolling out generative AI tools to write product listings for third-party sellers.
  • These platform-specific AIs and broad deployments such as OpenAI’s ChatGPT can “see” an image and write persuasive marketing copy to sell it.
  • Business owners may not be natural writers, and with AI they no longer need to pretend to be, but AI will also become more common in generating analysis of customer reviews, advertising campaigns, financial reports and insights on sales and profitability.
  • You may be among the millions of Americans who just purchased an item during the two-day Amazon Prime deals event. Did AI help in the process of convincing you to spend?

    Amazon said Prime members bought more than 150 million items from third-party sellers. It didn’t release much more data on the big retail event, and among the things we can’t know for sure is how much generative AI programs may have helped sellers do an even better job of pitching their products than in previous years.

    We do know for sure that getting a leg up on the competition is getting easier for e-commerce platform sellers through the latest AI.

    Generative AI tools — offered by e-commerce platforms, marketplaces and private companies — can help with some of the more labor-intensive, time-consuming and mundane tasks that sellers tend to hate. The goal of using these tools is to drive more sales with less effort — and angst — on the part of sellers. 

    AI can be used for many things, from writing impactful product listings to data analytics, but more of the focus of late has been on the product listing side. Amazon, for example, recently rolled out a generative AI tool to help sellers write more robust and effective product descriptions.

    A New York Times tech reviewer who recently tried out the latest version of OpenAI’s ChatGPT, which can “see, hear and speak,” said it did a very good job of writing product listings for items he wanted to sell on Meta’s Facebook Marketplace.

    These tools can “spit out the perfect product listing for you that is optimized to your customer base,” says Chris Jones, chief executive and co-founder of AMNI, an AI-powered platform that streamlines procurement, manufacturing and distribution.

    It’s obviously early days in the use of AI for e-commerce, and there will be some big hits and misses — as well as risks for any seller that blindly relies on AI. Here’s what sellers need to know about using AI to sell more effectively.

    Business owners shouldn’t feel the need to be writers

    Creating high-quality e-commerce content often doesn’t come naturally to sellers. There’s a need to create compelling product titles, bullet points and descriptions, which can be time-consuming and frustrating for sellers who don’t have a natural writing ability or the time to devote to these efforts. It can be daunting for sellers to sit in front of a blank screen and figure out what to write.

    Beyond just describing the product, sellers need to create a listing that’s also well optimized for Amazon search algorithms so it gets good exposure, said Greg Mercer, chief executive and founder of Jungle Scout, a platform that helps sellers start and scale their e-commerce business.

    AI can reduce — to seconds or minutes — these mundane listing tasks that might have taken some sellers three to five hours to complete, Mercer said.

    Amazon says it will save sellers time and effort

    Sellers on Amazon’s competitive third-party marketplace need to provide only a brief description of their product to allow the new AI tool to generate high-quality content for them to review. For example, they can plug in the item name and whether the product has variations and a brand name. Amazon’s models learn to infer product information from various sources: they can infer a table is round if specifications list a diameter, and they can infer the collar style of a shirt from its image, the company noted in a blog post about the AI launch.

    “In addition to saving sellers time, a more thorough product description also helps improve the shopping experience. Customers will find more complete product information, as the new technology will help sellers provide richer information with less effort,” the company stated.

    At eBay, an image is often the starting point

    eBay is also working on tools to auto-generate item descriptions, and a revamped image-based “magical” listing tool that leverages AI.

    The tool allows sellers to take or upload a photo in the eBay app — only available in Apple’s iOS for the time being — and let AI do the work. From the starting point of a photo, the AI can write titles, descriptions and add important information, such as product-release date, detailed category and sub-category, according to a company blog post. It can also combine with eBay’s other technology to suggest a listing price and shipping cost, the company said. 

    This latest version, which includes upgrades to a previous iteration based on customer feedback, is being tested by employees. The company said in the blog post that it expects to release the revamped tool to the public in the coming months. 

    Shopify shows how AI works for much more than product listings

    AI can be used to help sellers with much more than just product listings. “A lot of people don’t think of using AI to be their CFO and analyze data for them or to help do competitive research,” Mercer said. “AI is getting a lot more powerful than just writing product listings.”

    Harvard Business School AI guru Karim Lakhani, speaking recently at the CNBC Small Business Playbook event, said every small business owner should be using generative AI. “I think about ChatGPT as a thought partner, lowering the cost of cognition and new ideas,” Lakhani said.

    Shopify, for example, announced its AI tool Shopify Magic this past summer. It’s a suite of AI-enabled features integrated across the Shopify platform, and specifically designed to enhance commerce, the company said. Merchants receive contextually-relevant support for a range of tasks related to store building, marketing, customer support and back-office management.

    For example, merchants can create email campaigns using just a few keywords. They get persuasive subject lines, appealing content and recommended send times to achieve more effective click-through rates, according to a company video.

    Shopify Magic also drafts custom replies for a business’s more common customer questions, allowing the merchant to review and edit the content. Those answers are then shared automatically with customers who ask questions, so a merchant doesn’t need to respond in real-time. Another feature creates blog posts for holidays, business milestones or campaign ideas — including the ability to customize the tone of voice and translate the content into different languages, according to a spokeswoman.

    “AI creates an environment where an entrepreneur’s expertise, brand, and product can shine, and will help them take something from idea to reality much faster than previously possible,” said Miqdad Jaffer, the company’s director of product, in an email.

    Platform-specific tools may get the best sales results

    Some sellers use consumer-facing applications such as Microsoft-backed ChatGPT from OpenAI and Google’s rival chatbot Bard — both companies also offer business versions of the AI tools now — to help ease the burden of creating better product listings, but e-commerce professionals said platform-specific tools may be more effective, since they are tailored for that particular platform.

    That said, using broadly available tools — whether it is for product listings, analysis of customer reviews, advertising campaigns or financial reporting — would still be better than doing it on your own, Mercer said. 

    “Anything that can help them not need [human] resources, but achieve the same speed and scale is going to be the name of the game,” says Margo Kahnrose, chief marketing officer of Skai, a provider of data, insights and marketing technology.

    ]]>
    Fri, Oct 13 2023 10:26:04 AM
    Calls for AI regulations to protect jobs rise in Europe after ChatGPT's arrival https://www.nbcwashington.com/news/business/money-report/calls-for-ai-regulations-to-protect-jobs-rise-in-europe-after-chatgpts-arrival/3441225/ 3441225 post https://media.nbcwashington.com/2023/10/107231609-1682534718265-gettyimages-1252206786-AFP_33DW899.jpeg?quality=85&strip=all&fit=300,200
  • An IE University study showed that, out of a sample of 3,000 Europeans, 68% want their governments to introduce rules to safeguard jobs from AI advancement.
  • That number is up 18% from the share of people who responded the same way to a similar piece of research that IE University published in 2022.
  • It comes as governments around the world are working on regulation for AI algorithms.
  • A majority of Europeans want government restrictions on artificial intelligence to mitigate the impacts of the technology on job security, according to a major new study from Spain’s IE University.

    The study shows that out of a sample of 3,000 Europeans, 68% want their governments to introduce rules to safeguard jobs from the rising level of automation being brought about by AI.

    That number is up 18% from the share of people who responded the same way to a similar piece of research that IE University published in 2022. Last year, 58% of respondents to IE University’s study said they think AI should be regulated.

    “The most common fear is the potential for job loss,” said Ikhlaq Sidhu, dean of the IE School of SciTech at IE University.

    The report was produced by IE University’s Center for the Governance of Change, an applied research institution that seeks to enhance the understanding, anticipation and management of innovation.

    Standing out from the rest of Europe, Estonia is the only country where this view decreased — by 23% — from last year. In Estonia, only 35% of the population wants their government to impose limits on AI.

    Generally, though, the majority of people in Europe are in favor of governments regulating AI to stem the risk of job losses.

    “Public sentiment has been increasing towards acceptance of regulation for AI, particularly due to the recent rollouts of generative AI products such as ChatGPT and others,” Sidhu said.

    It comes as governments around the world are working on regulation for AI algorithms.

    In the European Union, a piece of legislation known as the AI Act would introduce a risk-based approach to governing AI, applying different levels of risk to different applications of the technology.

    Meanwhile, U.K. Prime Minister Rishi Sunak plans to hold an AI safety summit on Nov. 1 and Nov. 2 at Bletchley Park, home of the codebreakers whose work helped end World War II.

    Sunak, who faces a multitude of political challenges at home, has pitched Britain as the “geographical home” for AI safety regulation, touting the country’s heritage in science and technology.

    Worryingly, most Europeans say they wouldn’t feel confident distinguishing between content that’s AI-generated and content that’s genuine, according to IE University, with only 27% of Europeans believing they’d be able to spot AI-generated fake content.

    Older citizens in Europe expressed a higher degree of doubt about their ability to determine AI-generated and authentic content, with 52% saying they wouldn’t feel confident doing so.

    Academics and regulators are concerned by the risks of AI coming up with synthetically produced material that could jeopardize elections.

    ]]>
    Wed, Oct 11 2023 03:23:18 AM
    Meta and X questioned by lawmakers over lack of rules against AI-generated political deepfakes https://www.nbcwashington.com/news/national-international/meta-and-x-questioned-by-lawmakers-over-lack-of-rules-against-ai-generated-political-deepfakes/3438263/ 3438263 post https://media.nbcwashington.com/2023/10/META.jpg?quality=85&strip=all&fit=300,169 Deepfakes generated by artificial intelligence are having their moment this year, at least when it comes to making it look, or sound, like celebrities did something uncanny. Tom Hanks hawking a dental plan. Pope Francis wearing a stylish puffer jacket. U.S. Sen. Rand Paul sitting on the Capitol steps in a red bathrobe.

    But what happens next year ahead of a U.S. presidential election?

    Google was the first big tech company to say it would impose new labels on deceptive AI-generated political advertisements that could fake a candidate’s voice or actions. Now some U.S. lawmakers are calling on social media platforms X, Facebook and Instagram to explain why they aren’t doing the same.

    Two Democratic members of Congress sent a letter Thursday to Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino expressing “serious concerns” about the emergence of AI-generated political ads on their platforms and asking each to explain any rules they’re crafting to curb the harms to free and fair elections.

    “They are two of the largest platforms and voters deserve to know what guardrails are being put in place,” said U.S. Sen. Amy Klobuchar of Minnesota in an interview with The Associated Press. “We are simply asking them, ‘Can’t you do this? Why aren’t you doing this?’ It’s clearly technologically possible.”

    The letter to the executives from Klobuchar and U.S. Rep. Yvette Clarke of New York warns: “With the 2024 elections quickly approaching, a lack of transparency about this type of content in political ads could lead to a dangerous deluge of election-related misinformation and disinformation across your platforms – where voters often turn to learn about candidates and issues.”

    X, formerly Twitter, and Meta, the parent company of Facebook and Instagram, didn’t respond to requests for comment Thursday. Clarke and Klobuchar asked the executives to respond to their questions by Oct. 27.

    The pressure on the social media companies comes as both lawmakers are helping to lead a charge to regulate AI-generated political ads. A House bill introduced by Clarke earlier this year would amend a federal election law to require labels when election advertisements contain AI-generated images or video.

    “I think that folks have a First Amendment right to put whatever content on social media platforms that they’re moved to place there,” Clarke said in an interview Thursday. “All I’m saying is that you have to make sure that you put a disclaimer and make sure that the American people are aware that it’s fabricated.”

    For Klobuchar, who is sponsoring companion legislation in the Senate that she aims to get passed before the end of the year, “that’s like the bare minimum” of what is needed. In the meantime, both lawmakers said they hope that major platforms take the lead on their own, especially given the disarray that has left the House of Representatives without an elected speaker.

    Google has already said that starting in mid-November it will require a clear disclaimer on any AI-generated election ads that alter people or events on YouTube and other Google products. Google’s policy applies both in the U.S. and in other countries where the company verifies election ads. Facebook and Instagram parent Meta doesn’t have a rule specific to AI-generated political ads but has a policy restricting “faked, manipulated or transformed” audio and imagery used for misinformation.

    A more recent bipartisan Senate bill, co-sponsored by Klobuchar, Republican Sen. Josh Hawley of Missouri and others, would go farther in banning “materially deceptive” deepfakes relating to federal candidates, with exceptions for parody and satire.

    AI-generated ads are already part of the 2024 election, including one aired by the Republican National Committee in April meant to show the future of the United States if President Joe Biden is reelected. It employed fake but realistic photos showing boarded-up storefronts, armored military patrols in the streets, and waves of immigrants creating panic.

    Klobuchar said such an ad would likely be banned under the rules proposed in the Senate bill. So would a fake image of Donald Trump hugging infectious disease expert Dr. Anthony Fauci that was shown in an attack ad from Trump’s GOP primary opponent and Florida Gov. Ron DeSantis.

    As another example, Klobuchar cited a deepfake video from earlier this year purporting to show Democratic Sen. Elizabeth Warren in a TV interview suggesting restrictions on Republicans voting.

    “That is going to be so misleading if you, in a presidential race, have either the candidate you like or the candidate you don’t like actually saying things that aren’t true,” said Klobuchar, who ran for president in 2020. “How are you ever going to know the difference?”

    Klobuchar, who chairs the Senate Rules and Administration Committee, presided over a Sept. 27 hearing on AI and the future of elections that brought witnesses including Minnesota’s secretary of state, a civil rights advocate and some skeptics. Republicans and some of the witnesses they asked to testify have been wary about rules seen as intruding into free speech protections.

    Ari Cohn, an attorney at think-tank TechFreedom, told senators that the deepfakes that have so far appeared ahead of the 2024 election have attracted “immense scrutiny, even ridicule,” and haven’t played much role in misleading voters or affecting their behavior. He questioned whether new rules were needed.

    “Even false speech is protected by the First Amendment,” Cohn said. “Indeed, the determination of truth and falsity in politics is properly the domain of the voters.”

    Some Democrats are also reluctant to support an outright ban on political deepfakes. “I don’t know that that would be successful, particularly when it gets to First Amendment rights and the potential for lawsuits,” said Clarke, who represents parts of Brooklyn in Congress.

    But her bill, if passed, would empower the Federal Election Commission to start enforcing a disclaimer requirement on AI-generated election ads similar to what Google is already doing on its own.

    The FEC in August took a procedural step toward potentially regulating AI-generated deepfakes in political ads, opening to public comment a petition that asked it to develop rules on misleading images, videos and audio clips.

    The public comment period for the petition, brought by the advocacy group Public Citizen, ends Oct. 16.

    Associated Press writer Ali Swenson contributed to this report.

    ]]>
    Thu, Oct 05 2023 07:10:33 PM
    Job postings mentioning AI have more than doubled in two years, LinkedIn data shows https://www.nbcwashington.com/news/business/money-report/job-postings-mentioning-ai-have-more-than-doubled-in-two-years-linkedin-data-shows/3436629/ 3436629 post https://media.nbcwashington.com/2023/10/107267835-1688677895584-gettyimages-1481181755-22_64_p_gorodenkoff-036.jpeg?quality=85&strip=all&fit=300,169 Since artificial intelligence began booming late last year, a steady stream of questions and concerns have come up. Will my job be impacted, will I be laid off, will my day-to-day change because of generative AI?

    There isn’t an answer to many of these questions yet, but it is already clear that AI will have an immense impact on the labor market. And that impact has already begun.

    Job postings on LinkedIn that mention either AI or generative AI more than doubled globally between July 2021 and July 2023, according to new data from the jobs and networking platform.

    Some countries saw an even bigger increase compared to the 2.2x global average — the U.K. saw a 2.3x rise, while Germany and France saw jumps of 2.6x and 2.8x, respectively.

    The change is happening across industries, Olivier Sabella, vice president of LinkedIn Talent Solutions for EMEA and LATAM, told CNBC Make It.

    “We’re seeing demand for AI skills increasingly appear across a wide range of industries and geographies,” he said.

    “These job posts vary from roles where professionals will directly work on AI development, such as AI engineer, to job postings where AI is listed as a required skill — for example a digital product manager or cyber security consultant,” Sabella explained.

    Prospective employees are responding to this shift towards AI becoming a bigger part of jobs.

    “LinkedIn job posts that mention artificial intelligence or generative AI have seen 17% greater application growth over the past two years than job posts with no such mentions,” the platform’s Global Talent Trends report, published this month, said.

    And even among those who may not be applying to AI-related jobs just yet, the appetite to use the latest technology is clear, a LinkedIn survey of close to 30,000 professionals from countries around the world showed.

    Eighty-nine percent of professionals surveyed globally said they were excited to use AI. Not all countries are as keen though — for example, just 76% of U.K. professionals agreed.

    Fifty-seven percent of professionals globally said they want to learn more about AI. This is reflected in the rise of AI skills, with more and more LinkedIn users saying they know how to work with AI-based tools and products.

    “The pace at which LinkedIn members added AI skills to their profiles has nearly doubled since the launch of ChatGPT alone,” Sabella said. Since early 2016, the number of people who say they have AI skills has increased ninefold, he added.

    As both employers and employees have been trying to adjust to a future of work that includes AI, skills have become a hot topic. Questions over which skills are needed and how developed they have to be have emerged, with some saying even basic knowledge can be beneficial.

    Building AI skills is important as work environments are changing and expectations and requirements for jobs are shifting, Sabella said.

    “Evolving skill sets are a long-term shift, and something that is already front of mind for many business leaders,” he explained.

    So as it becomes clear that AI will impact jobs and work for everyone in the future, expanding your skills is seen as increasingly important when it comes to future-proofing careers.

    ]]>
    Wed, Oct 04 2023 01:49:32 AM
    Tom Hanks warns fans of AI ad using his likeness: ‘Beware' https://www.nbcwashington.com/news/national-international/tom-hanks-warns-fans-of-ai-ad-using-his-likeness-beware/3434660/ 3434660 post https://media.nbcwashington.com/2023/10/GettyImages-1498317917.jpg?quality=85&strip=all&fit=300,200 Tom Hanks is warning his fans about a video that he says uses an “AI version” of him to sell a dental plan. 

    The Oscar winner posted a screenshot on Instagram Sept. 30 from the video featuring an eerily similar image of him.

    “BEWARE!!” the 67-year-old actor wrote on top of the image. “There’s a video out there promoting some dental plan with an AI version of me.”

    He added, “I have nothing to do with it,” before signing off with his name.

    It’s unclear where the video originated and what dental plan the video was promoting. Representatives for Hanks did not immediately respond to TODAY.com’s request for further comment.

    The “Sleepless in Seattle” star has called attention to the dangerous potential of artificial intelligence and deepfakes before. 

    In May, he was a guest on the “Adam Buxton Podcast” and spoke about AI likely impacting movies in the future. 

    He said films starring AI versions of actors could become “a bona fide possibility.” He also pointed out that the technology allows actors to “re-create themselves at any age.”

    “I could be hit by a bus tomorrow and that’s it, but my performances can go on and on and on,” Hanks said. “And outside of the understanding that it’s been done by AI or deepfake, there’ll be nothing to tell you that it’s not me and me alone.”

    The “Forrest Gump” actor continued, “And it’s going to have some degree of lifelike quality. And that is certainly an artistic challenge, but it’s also a legal one.” 

    He told host Adam Buxton that fans will likely be able to tell the difference, but they might not care. 

    Zelda Williams, Robin Williams’ daughter, also spoke out against the use of AI Oct. 1 and shared on her Instagram story that she’s heard of people wanting to re-create AI models of actors “who cannot consent” to it, like her father, who died in 2014.

    “This isn’t theoretical, it is very very real,” she wrote. “I’ve already heard AI used to get his ‘voice’ to say whatever people want and while I find it personally disturbing, the ramifications go far beyond my own feelings.”

    She said at best, AI re-creations are a “poor facsimile of greater people,” but are “at their worst, a horrendous Frankensteinian monster, cobbled together from the worst bits of everything this industry is, instead of what it should stand for.”

    Her statement relates to the ongoing negotiations over the use of AI in the entertainment industry between SAG-AFTRA and the Alliance of Motion Picture and Television Producers. Bargaining table talks regarding the union’s strike, which began July 14, resume Oct. 2.

    (Comcast, the corporation that owns TODAY’s parent company, NBCUniversal, is one of the entertainment companies represented by the AMPTP.)

    Hanks isn’t the only celebrity who has had their likeness used without their knowledge to promote a product. 

    In October 2022, Oprah Winfrey alerted her fans after her name and photo were included in an ad for weight loss gummies.

    She posted a video on Instagram confirming she did not endorse the supplement.

    “I have nothing to do with weight loss gummies or diet pills, and I don’t want you all taken advantage of by people misusing my name. So please know I have no weight loss gummies,” she said.

    In the caption, she wrote, “Fraud alert! Please don’t buy any weight loss gummies with my picture or name on them. There have been social media ads, emails, and fake websites going out and I want you to hear it straight from me, that I have nothing to do with them.”

    Winfrey also asked her fans not to engage with the ad to avoid having their personal information possibly compromised. 

    This article first appeared on TODAY.com.


    ]]>
    Sun, Oct 01 2023 08:35:58 PM
    In Hollywood writers' battle against AI, humans win (for now) https://www.nbcwashington.com/news/national-international/in-hollywood-writers-battle-against-ai-humans-win-for-now/3432129/ 3432129 post https://media.nbcwashington.com/2023/09/WGA-STRIKERS.jpg?quality=85&strip=all&fit=300,169 After a 148-day strike, Hollywood screenwriters secured significant guardrails against the use of artificial intelligence in one of the first major labor battles over generative AI in the workplace.

    During the nearly five-month walkout, no issue resonated more than the use of AI in script writing. What was once a seemingly lesser demand of the Writers Guild of America became an existential rallying cry.

    The strike was also about streaming-era economics, writers room minimums and residuals — not exactly compelling picket-sign fodder. But the threat of AI vividly cast the writers’ plight as a human-versus-machine clash, with widespread implications for other industries facing a radically new kind of automation.

    In the coming weeks, WGA members will vote on whether to ratify a tentative agreement, which requires studios and production companies to disclose to writers if any material given to them has been generated by AI partially or in full. AI cannot be a credited writer. AI cannot write or rewrite “literary material.” AI-generated writing cannot be source material.

    “AI-generated material can’t be used to undermine a writer’s credit or separated rights,” the proposed contract reads.

    Many experts see the screenwriters’ deal as a forerunner for labor battles to come.

    “I hope it will be a model for a lot of other content-creation industries,” said Tom Davenport, a professor of information technology at Babson College and author of “All-in on AI: How Smart Companies Win Big with Artificial Intelligence.” “It pretty much ensures that if you’re going to use AI, it’s going to be humans working alongside AI. That, to me, has always been the best way to use any form of AI.”

    The tentative agreement between the Writers Guild and the Alliance of Motion Picture and Television Producers, which negotiates on behalf of the studios, doesn’t prohibit all uses of artificial intelligence. Both sides have acknowledged it can be a worthwhile tool in many aspects of filmmaking, including script writing.

    The deal states that writers can use AI if the company consents. But a company cannot require a writer to use AI software.

    Language over AI became a sticking point in the writers’ negotiations, which dragged on last week in part due to the challenges of bargaining on such a fast-evolving technology.

    When the writers strike began on May 2, it was just five months after OpenAI released ChatGPT, the AI chatbot that can write essays, have sophisticated conversations and craft stories from a handful of prompts. Studios said it was too early to tackle AI in these negotiations and preferred to wait until 2026.

    Ultimately, they hashed out terms while noting that the outlook is certain to change. Under the draft contract, “the parties acknowledge that the legal landscape around the use of (generative AI) is uncertain and rapidly developing.” The companies and the guild agreed to meet at least twice a year during the contract’s three-year term.

    At the same time, there are no prohibitions on studios using scripts they own to train AI systems. The WGA left those issues up to the legal system to parse. A clause notes that writers retain the right to assert that their work has been exploited in training AI software.

    That’s been an increasingly prominent concern in the literary world. Last week, 17 authors, including John Grisham, Jonathan Franzen and George R.R. Martin, filed a lawsuit against OpenAI alleging the “systematic theft on a massive scale” of their copyrighted books.

    The terms the WGA achieved will surely be closely watched by others — particularly the striking members of the actors union, SAG-AFTRA.

    “This is the first step on a long process of negotiating and working through what generative AI means for the creative industry — not just writers but visual artists, actors, you name it,” says David Gunkel, a professor of media studies at Northern Illinois University and author of “Person, Thing, Robot.”

    Actors, on strike since July 14, are likewise seeking better compensation from streaming. But they are also demanding safeguards against AI, which can potentially use a star’s likeness without his or her permission or replace background actors entirely.

    Attempts to adopt AI “as a normal operating procedure” are “literally dehumanizing the workforce,” actor Bryan Cranston said recently on a picket line. “It’s not good for society. It’s not good for our environment. It’s not good for working-class families.”

    In other developments, SAG-AFTRA members voted overwhelmingly Monday in favor of a strike authorization against video game companies. The use of AI in gaming is a particularly acute anxiety for voice-over actors.

    Some skeptics doubt whether the writers made significant headway on AI. Media mogul Barry Diller, chairman of the digital media company IAC, believes not enough was done.

    “They spent months trying to craft words to protect writers from AI, and they ended up with a paragraph that protected nothing from no one,” Diller told CNBC.

    Robert D. Atkinson, president of the tech policy think tank Information Technology & Innovation Foundation, said limiting AI is unproductive.

“If we ban the use of tools to make organizations more productive, we are consigning ourselves to stagnation,” Atkinson wrote on X, formerly known as Twitter.

    What most observers agree on, though, is that this was just the first of many AI labor disputes. Gunkel expects to see both writers and studios continue to experiment with AI.

    “We’re so early into this that no one is able to anticipate everything that might come up with generative AI in the creative industries,” Gunkel said. “We’re going to see the need again and again to revisit a lot of these questions.”

    Wed, Sep 27 2023 06:03:45 PM
WGA, studios reportedly discussing use of AI in content creation as new deal looms https://www.nbcwashington.com/entertainment/entertainment-news/wga-studios-reportedly-closing-in-on-deal-after-monthslong-strike/3429626/

Negotiators for the striking Writers Guild of America and studio representatives were planning to meet again Sunday to continue work “in the final phase” of contract talks to potentially end a monthslong strike that’s crippled the entertainment industry.

    According to the major industry trade publications, legal teams were said to be going over the fine points of complex issues such as residuals for streamed content and the use of artificial intelligence to create content.

    The sides were back at the bargaining table Saturday after talks during the previous three days failed to yield an agreement, although management insiders claimed progress was being made.

    The WGA and AMPTP issued a joint statement Saturday night announcing the sides will meet again Sunday.

The so-called “Gang of Four” major studio bosses (Netflix’s Ted Sarandos, Disney’s Bob Iger, Universal’s Donna Langley and Warner Bros. Discovery’s David Zaslav) were present at the negotiations Friday for the third consecutive day.

    Those four were no longer in the Sherman Oaks negotiating room by Saturday afternoon, possibly signaling that the major issues had been resolved.

    Representatives from the WGA and AMPTP met Wednesday for the first time since mid-August, then met again Thursday.

    Writers, who went on strike May 2, were joined on the picket line in July by the SAG-AFTRA actors’ union. There have been no known contract talks between the studios and SAG-AFTRA since that strike began.

    Both unions are pushing for protections against the use of artificial intelligence and improvements in salary, particularly for successful streaming programs.

    With negotiations seemingly stalled earlier this month, the WGA negotiating team issued a statement suggesting that some traditional Hollywood studios should break ranks with the AMPTP and reach a deal directly with the writers’ union. The WGA suggested it has spoken with some studio executives who believe a deal could be quickly struck.

    “So, while the intransigence of the AMPTP structure is impeding progress, these behind-the-scenes conversations demonstrate there is a fair deal to be made that addresses our issues,” according to the WGA negotiating team. “… We have made it clear that we will negotiate with one or more of the major studios, outside the confines of the AMPTP, to establish the new WGA deal.

“There is no requirement that the companies negotiate through the AMPTP. So, if the economic destabilization of their own companies isn’t enough to cause a studio or two or three to either assert their own self-interest inside the AMPTP, or to break away from the broken AMPTP model, perhaps Wall Street will finally make them do it.”

    The AMPTP, however, issued a statement of its own saying all of its members are committed to working within the alliance to reach a deal for all studios.

    “The AMPTP member companies are aligned and are negotiating together to reach a resolution,” a statement from the alliance said. “Any suggestion to the contrary is false.

    “Every member company of the AMPTP wants a fair deal for writers and actors and an end to the strikes, which are affecting not only our writer and actor colleagues, but also thousands of others across the industry. That is why the AMPTP has repeatedly put forward offers that address major priorities of the WGA, including a last round of offers on Aug. 17 and 18.”

    Sun, Sep 24 2023 01:53:18 AM
Uber Eats to begin accepting SNAP as part of new initiatives to help consumers save money https://www.nbcwashington.com/entertainment/the-scene/uber-eats-to-begin-accepting-food-stamps-as-part-of-new-initiatives-to-help-consumers-save-money/3428499/

Uber Eats has announced several new initiatives to help consumers save money on food and groceries as prices continue to rise across the country.

    The food delivery app is already used by millions of consumers to have food from restaurants and grocery orders delivered to their home or business. Now, the company says it is starting new programs to help people save time and money as they shop for food on the go.

    Here are all the ways Uber Eats will be helping users save money later this year and into 2024:

    SNAP/Food Stamps

    Food stamps issued by the U.S. Department of Agriculture, officially known as the Supplemental Nutrition Assistance Program (SNAP), will soon be accepted by Uber Eats.

    More than 41 million Americans receive SNAP benefits to help pay for fresh and nutritious groceries.

    Starting in 2024, Uber Eats will accept SNAP benefits as a form of payment, reducing barriers for people who live in food deserts or have transportation issues.

    Other food delivery apps, such as DoorDash and Instacart, already accept SNAP as payment for grocery deliveries.

    Healthcare Benefit Payments

    Uber Eats will also begin accepting Flex Cards, FSA Cards and other relevant waiver payments provided to Americans from Medicare Advantage and Managed Medicaid plans as part of its food delivery services.

    Artificial Intelligence

    Starting later this year, Uber Eats users will be able to use an AI chat feature to discover new food options, as well as deals and discounts.

    Consumers will even be able to make affordable meal plans using the new AI tool, the company said.

    Sales Aisle

In another move to help consumers save time and money, Uber Eats will launch a Sales Aisle section of its app to put the best deals and brands in one place.

    The company described the new feature as something that “combines promos and deals into one easy to find space, saving you the hassle of long searches through the app.”

    Thu, Sep 21 2023 07:28:30 PM
John Grisham, George R.R. Martin and more authors sue OpenAI for copyright infringement https://www.nbcwashington.com/news/national-international/john-grisham-george-r-r-martin-and-more-authors-sue-openai-for-copyright-infringement/3427654/

John Grisham, Jodi Picoult and George R.R. Martin are among 17 authors suing OpenAI for “systematic theft on a mass scale,” the latest in a wave of legal action by writers concerned that artificial intelligence programs are using their copyrighted works without permission.

    In papers filed Tuesday in federal court in New York, the authors alleged “flagrant and harmful infringements of plaintiffs’ registered copyrights” and called the ChatGPT program a “massive commercial enterprise” that is reliant upon “systematic theft on a mass scale.”

    The suit was organized by the Authors Guild and also includes David Baldacci, Sylvia Day, Jonathan Franzen and Elin Hilderbrand among others.

    “It is imperative that we stop this theft in its tracks or we will destroy our incredible literary culture, which feeds many other creative industries in the U.S.,” Authors Guild CEO Mary Rasenberger said in a statement. “Great books are generally written by those who spend their careers and, indeed, their lives, learning and perfecting their crafts. To preserve our literature, authors must have the ability to control if and how their works are used by generative AI.”

The lawsuit cites specific ChatGPT searches for each author, such as one for Martin that alleges the program generated “an infringing, unauthorized, and detailed outline for a prequel” to “A Game of Thrones” that was titled “A Dawn of Direwolves” and used “the same characters from Martin’s existing books in the series ‘A Song of Ice and Fire.’”

In a statement Wednesday, an OpenAI spokesperson said that the company respects “the rights of writers and authors, and believe[s] they should benefit from AI technology.

    “We’re having productive conversations with many creators around the world, including the Authors Guild, and have been working cooperatively to understand and discuss their concerns about AI. We’re optimistic we will continue to find mutually beneficial ways to work together to help people utilize new technology in a rich content ecosystem,” the statement reads.

    Earlier this month, a handful of authors that included Michael Chabon and David Henry Hwang sued OpenAI in San Francisco for “clear infringement of intellectual property.”

    In August, OpenAI asked a federal judge in California to dismiss two similar lawsuits, one involving comedian Sarah Silverman and another from author Paul Tremblay. In a court filing, OpenAI said the claims “misconceive the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

    Author objections to AI have helped lead Amazon.com, the country’s largest book retailer, to change its policies on e-books. The online giant is now asking writers who want to publish through its Kindle Direct Program to notify Amazon in advance that they are including AI-generated material. Amazon is also limiting authors to three new self-published books on Kindle Direct per day, an effort to restrict the proliferation of AI texts.

    Wed, Sep 20 2023 06:51:47 PM
Amazon unveils ‘smarter and more conversational’ Alexa amid AI race among tech companies https://www.nbcwashington.com/news/national-international/amazon-unveils-smarter-and-more-conversational-alexa-amid-ai-race-among-tech-companies/3427453/

Amazon has unveiled a slew of gadgets and an update to its popular voice assistant Alexa, infusing it with more generative AI features to better compete with other tech companies who’ve rolled out flashy chatbots.

During a demonstration in Washington, D.C., on Wednesday, Amazon’s devices chief Dave Limp said the latest language model will allow consumers to have more human-like conversations with a “smarter and more conversational” Alexa.

    The company showed different interactions through a pre-recorded video and a live demo where Alexa responds to prompts to write a poem, give ideas for a date night and provide a breakdown of a football game. Limp also demonstrated a capability where the voice assistant can prep a text message, though his exchange with Alexa included some awkward pauses where he had to repeat some prompts twice before getting an answer.

The company says it’s also working on a “speech-to-speech” model that will, for example, allow Alexa to exhibit human-like attributes, such as laughter and phrases like “uh-huh” during conversations.

    Amazon holds the annual gadget event to exhibit new devices in front of journalists and industry insiders before they officially hit the market. Among other things, the tech giant also showcased a fee-based emergency service for Alexa that allows users to call for help without using the phone, new Echo smart speakers as well as Amazon Fire tablets for kids.

    In August, Amazon CEO Andy Jassy announced Limp would retire after almost 14 years with the company, where he’s overseen innovations in Kindle readers, Amazon’s Fire TV and Echo devices. Although the devices unit has rolled out a large number of gadgets over the years, not all of them have caught on. Think the Alexa-enabled microwave or the roaming Astro robot, which Amazon unveiled in 2021 at an introductory price of $1,000 but has had a limited rollout.

The devices unit was hit by Amazon’s company-wide layoffs several months ago. The company hasn’t announced Limp’s replacement.

    Amazon is a leader in the U.S. smart speaker market, commanding nearly 64 million monthly users of its Echo devices, according to Insider Intelligence. But the market research company forecasts that the devices will lose some market share in the next few years as the number of smart speakers continues to grow. Consumers have also become more likely to use their smartphones to access voice assistants instead of smart speakers.

    For years, Amazon has been seeking to drive consumer purchases from its Echo devices, a dream that hasn’t been fully realized.

    Amazon said last year 50% of Alexa customers used their device to shop. Limp noted on Wednesday that more customers have been using Alexa to shop year-over-year. According to Adobe Analytics, consumers typically use their smart speakers to play music, check the weather and set alarms and reminders.

    Wed, Sep 20 2023 03:17:18 PM
Which is better — ChatGPT or a travel agent? Here's our pick https://www.nbcwashington.com/news/business/money-report/which-is-better-chatgpt-or-a-travel-agent-heres-our-pick/3425210/

Planning a holiday can be stressful — that’s where travel agents come in.

    But now, travelers have another option: chatbots like ChatGPT, Bard AI and Microsoft Bing. Simply input a prompt and watch the travel recommendations pour in. The best parts? It’s instantaneous and, for the most part, free.

    But which is better when it comes to planning vacations?

    Intrepid Travel, a small group travel agency, accepted CNBC Travel’s request to find out.

    CNBC asked both sides to plan a two-day trip for four friends, all in their mid-20s, to Melbourne, Australia.

    Here’s how they fared.

    Where to stay in Melbourne

    The ask: Recommend three places to stay in Melbourne that have a pool and gym, are near Swanston Street, and that are priced less than $500 a night.

    Right off the bat, there was a rather glaring error with ChatGPT: All three recommendations were no longer in service. If that wasn’t enough, some of the places lacked both a pool and a gym, and one was over the budget.

    Intrepid Travel, on the other hand, provided options that came with either a pool or a gym, or both. The company also recognized that those amenities were not necessities but additional benefits.

    The winner: Intrepid Travel

    Where to eat

    The ask: Provide dining options for breakfast, lunch, dinner and post-dinner drinks for two days.

    Again, ChatGPT struggled. The suggested restaurant on the first day, a place called Fatto Bar & Cantina, had been closed for years.

    Apart from that, a quick Google search of the other places showed that they were (thankfully) still in operation. Those were, to me, on the safer end, with suggested spots appearing on several “must-visit” restaurant lists for Melbourne.

    Conversely, I felt that Intrepid Travel suggested places that were more niche and representative of Melbourne’s unique culture. 

    It is worth noting that both Intrepid Travel and ChatGPT proposed breakfast at Hardware Société, a popular brunch spot with locations in Paris and Barcelona too.

    The winner: Intrepid Travel

    What to do

    The ask: Provide a two-day itinerary around Melbourne with a focus on art and cultural activities.

Both Intrepid Travel and ChatGPT came back with reasonable options around the city. Multiple places were on both lists — Queen Victoria Market, Hosier Lane and National Gallery of Victoria — which points to the popularity of those spots.

    My favorite recommendation? Incube8r, a store with handmade gifts and art, as recommended by Intrepid Travel.

    The winner: Intrepid Travel (again)

    Finding a ‘hidden gem’

The ask: Recommend one place that is not well known by travelers.

Intrepid Travel’s hidden gem recommendation: Le Bar Europeen. It’s been touted as Australia’s smallest bar and barely fits four people.

    Intrepid Travel recommended hidden speakeasy Le Bar Europeen for a nightcap, and the Yalinguth App walking tour as a daytime activity. I found both recommendations exciting and felt that they were lesser-known ways to explore the city.

Between the two, I particularly enjoyed the Yalinguth App walking tour, which is an audio tour along Gertrude Street in Melbourne’s Fitzroy district. The app uses geolocated stories and sounds from Australia’s Aboriginal community so listeners can understand a slice of Australia’s past as they make their way around one of Melbourne’s cultural hubs.

    On the other hand, ChatGPT interpreted the request as asking for a full day’s itinerary, recommending visits to Hardware Société, Rippon Lea House and Gardens, Queen Victoria Market, Melbourne Museum, Chin Chin and Eau De Vie.

I don’t consider any of those “hidden gems” in Melbourne, as all are rather popular locations for tourists to visit.

    The winner: Intrepid Travel

    Conclusion

    Ultimately, some of the teething problems I had with ChatGPT boiled down to the chatbot not being up-to-date — it currently only “knows” data up to 2021. 

    In ordinary circumstances, a two-year time lag doesn’t seem like much. After all, restaurants and hotels open and close all the time! That said, the initial two years of the Covid-19 pandemic caused many closures in the hospitality sector, making recommendations given prior to it unreliable at times.

    I also found browsing Intrepid’s itinerary more enjoyable as each recommendation came with a short write-up. The company also suggested specific activities and dishes to try at each location.

    On the other hand, ChatGPT was much more succinct in its recommendations. Though impersonal and utilitarian, it got the job done. However, I found myself less excited about my trip than when I read Intrepid Travel’s suggestions.

    Overall, I won’t discount the recommendations put forth by ChatGPT. It’s a quick and easy way to suss out the classic top spots to visit on your holiday. But if you want a more personalized itinerary that focuses more on local spots, sticking with travel companies is the way to go.

    Sun, Sep 17 2023 09:41:12 PM
A boy saw 17 doctors over 3 years for chronic pain. ChatGPT found the right diagnosis https://www.nbcwashington.com/news/national-international/a-boy-saw-17-doctors-over-3-years-for-chronic-pain-chatgpt-found-the-right-diagnosis/3421991/

During the COVID-19 lockdown, Courtney bought a bounce house for her two young children. Soon after, her son, Alex, then 4, began experiencing pain.

    “(Our nanny) started telling me, ‘I have to give him Motrin every day, or he has these gigantic meltdowns,’” Courtney, who asked not to use her last name to protect her family’s privacy, tells TODAY.com. “If he had Motrin, he was totally fine.”

    Then Alex began chewing things, so Courtney took him to the dentist. What followed was a three-year search for the cause of Alex’s increasing pain and eventually other symptoms.

Alex saw 17 doctors over three years for his chronic pain, but none were able to find a diagnosis that explained all of his symptoms, his mom says.

    The beginning of the end of the journey came earlier this year, when Courtney finally got some answers from an unlikely source, ChatGPT. The frustrated mom made an account and shared with the artificial intelligence platform everything she knew about her son’s symptoms and all the information she could gather from his MRIs.

    “We saw so many doctors. We ended up in the ER at one point. I kept pushing,” she says. “I really spent the night on the (computer) … going through all these things.”

    So, when ChatGPT suggested a diagnosis of tethered cord syndrome, “it made a lot of sense,” she recalls.

    Pain, grinding teeth, dragging leg

    When Alex began chewing on things, his parents wondered if his molars were coming in and causing pain. As it continued, they thought he had a cavity.

    “Our sweet personality — for the most part — (child) is dissolving into this tantrum-ing crazy person that didn’t exist the rest of the time,” Courtney recalls.

    The dentist “ruled everything out” but thought maybe Alex was grinding his teeth and believed an orthodontist specializing in airway obstruction could help. Airway obstructions impact a child’s sleep and could explain why he seemed so exhausted and moody, the dentist thought. The orthodontist found that Alex’s palate was too small for his mouth and teeth, which made it tougher for him to breathe at night. She placed an expander in Alex’s palate, and it seemed like things were improving.

    “Everything was better for a little bit,” Courtney says. “We thought we were in the home stretch.”

    But then she noticed Alex had stopped growing taller, so they visited the pediatrician, who thought the pandemic was negatively affecting his development. Courtney didn’t agree, but she still brought her son back in early 2021 for a checkup.

    “He’d grown a little bit,” she says.

    The pediatrician then referred Alex to physical therapy because he seemed to have some imbalances between his left and right sides.

    “He would lead with his right foot and just bring his left foot along for the ride,” Courtney says.

    But before starting physical therapy, Alex had already been experiencing severe headaches that were only getting worse. He visited a neurologist, who said Alex had migraines. The boy also struggled with exhaustion, so he was taken to an ear, nose and throat doctor to see if he was having sleep problems due to his sinus cavities or airway.

    No matter how many doctors the family saw, the specialists would only address their individual areas of expertise, Courtney says.

    “Nobody’s willing to solve for the greater problem,” she adds. “Nobody will even give you a clue about what the diagnosis could be.”

    Next, a physical therapist thought that Alex could have something called Chiari malformation, a congenital condition that causes abnormalities in the brain where the skull meets the spine, according to the American Association of Neurological Surgeons. Courtney began researching it, and they visited more doctors — a new pediatrician, a pediatric internist, an adult internist and a musculoskeletal doctor — but again reached a dead end.

    In total, they visited 17 different doctors over three years. But Alex still had no diagnosis that explained all his symptoms. An exhausted and frustrated Courtney signed up for ChatGPT and began entering his medical information, hoping to find a diagnosis.

    “I went line by line of everything that was in his (MRI notes) and plugged it into ChatGPT,” she says. “I put the note in there about … how he wouldn’t sit crisscross applesauce. To me, that was a huge trigger (that) a structural thing could be wrong.”

    She eventually found tethered cord syndrome and joined a Facebook group for families of children with it. Their stories sounded like Alex’s. She scheduled an appointment with a new neurosurgeon and told her she suspected Alex had tethered cord syndrome. The doctor looked at his MRI images and knew exactly what was wrong with Alex.

“She said point blank, ‘Here’s occulta spina bifida, and here’s where the spine is tethered,’” Courtney says.

    Tethered cord syndrome occurs when the tissue in the spinal cord forms attachments that limit movement of the spinal cord, causing it to stretch abnormally, according to the American Association of Neurological Surgeons.

    With tethered cord syndrome, “the spinal cord is stuck to something. It could be a tumor in the spinal canal. It could be a bump on a spike of bones. It could just be too much fat at the end of the spinal cord,” Dr. Holly Gilmer, a pediatric neurosurgeon at the Michigan Head & Spine Institute, who treated Alex, tells TODAY.com. “The abnormality can’t elongate … and it pulls.” 

    It can happen in patients with spina bifida, a birth defect where part of the spinal cord doesn’t develop fully and some of the spinal cord and nerves are exposed. In many children with spina bifida, there’s a visible opening in the child’s back. But the type Alex had is closed and considered “hidden,” according to the U.S. Centers for Disease Control and Prevention, which means it can be difficult to diagnose.

    “My son doesn’t have a hole. There’s almost what looks like a birthmark on the top of his buttocks, but nobody saw it,” Courtney says. “He has a crooked belly button.”

    Gilmer says doctors often find these conditions soon after birth, but in some cases, the marks — such as a dimple, a red spot or a tuft of hair — that indicate occult spina bifida can be missed. Then doctors rely on symptoms to make the diagnosis, which can include dragging a leg, pain, loss of bladder control, constipation, scoliosis, foot or leg abnormalities and a delay in hitting milestones, such as sitting up and walking.

    “In young children, it can be difficult to diagnose because they can’t speak,” Gilmer says, adding that many parents and children don’t realize that their symptoms indicate a problem. “If this is how they have always been, they think that’s normal.” 

    When Courtney finally had a diagnosis for Alex, she experienced “every emotion in the book, relief, validated, excitement for his future.”

    ChatGPT and medicine

    ChatGPT is a type of artificial intelligence program that responds based on input that a person enters into it, but it can’t have a conversation or provide answers in the way that many people might expect.

    That’s because ChatGPT works by “predicting the next word” in a sentence or series of words based on existing text data on the internet, Andrew Beam, Ph.D., assistant professor of epidemiology at Harvard who studies machine learning models and medicine, tells TODAY.com. “Anytime you ask a question of ChatGPT, it’s recalling from memory things it has read before and trying to predict the piece of text.”

    When using ChatGPT to make a diagnosis, a person might tell the program, “I have fever, chills and body aches,” and it fills in “influenza” as a possible diagnosis, Beam explains.

    “It’s going to do its best to give you a piece of text that looks like a … passage that it’s read,” he adds.
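To make the next-word idea concrete, here is a minimal, hypothetical sketch that continues a symptom-style prompt with the small open-source GPT-2 model through Hugging Face's transformers library. GPT-2 is only a stand-in for illustration and is not the model behind ChatGPT; the prompt and generation settings are assumptions, not anything Beam or OpenAI describe.

```python
# Minimal sketch of "predicting the next word": a language model extends a
# symptom-style prompt one plausible token at a time. GPT-2 is a small,
# open-source stand-in here, not the model behind ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The patient reports fever, chills and body aches. The most likely diagnosis is"

# The pipeline repeatedly predicts the next token and appends it to the prompt.
result = generator(prompt, max_new_tokens=8, num_return_sequences=1)
print(result[0]["generated_text"])
```

Running the sketch simply continues the sentence with whatever wording the model finds most probable, which is the behavior Beam describes, not a verified medical answer.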

    There are both free and paid versions of ChatGPT, and the latter works much better than the free version, Beam says. But both seem to work better than the average symptom checker or Google as a diagnostic tool. “It’s a super high-powered medical search engine,” Beam says.

    It can be especially beneficial for patients with complicated conditions who are struggling to get a diagnosis, Beam says.

    These patients are “groping for information,” he adds. “I do think ChatGPT can be a good partner in that diagnostic odyssey. It has read literally the entire internet. It may not have the same blind spots as the human physician has.”

    But it’s not likely to replace a clinician’s expertise anytime soon, he says. For example, ChatGPT fabricates information sometimes when it can’t find the answer. Say you ask it for studies about influenza. The tool might respond with several titles that sound real, and the authors it lists may have even written about flu before — but the papers may not actually exist.

    This phenomenon is called “hallucination,” and “that gets really problematic when we start talking about medical applications because you don’t want it to just make things up,” Beam says.

    Diagnosis and treatment

    Alex is “happy go lucky” and loves playing with other children. He played baseball last year, but he quit because he was injured. Also, he had to give up hockey because wearing ice skates hurts his back and knees. He found a way to adapt, though.

    “He’s so freaking intelligent,” Courtney says. “He’ll climb up on a structure, stand on a chair, and starts being the coach. So, he keeps himself in the game.”

    After receiving the diagnosis, Alex underwent surgery to fix his tethered cord syndrome a few weeks ago.

    “We detach the cord from where it is stuck at the bottom of the tailbone essentially,” Gilmer says. “That releases the tension.” 

Alex is still recovering. Gilmer says children bounce back from this surgery relatively quickly. Often the treatment reduces any symptoms children were having, she says. Alex’s mom can see the joy on his face now.

    Courtney shared their story to help others facing similar struggles.

    “There’s nobody that connects the dots for you,” she says. “You have to be your kid’s advocate.”

    This article first appeared on TODAY.com. More from TODAY:

    Tue, Sep 12 2023 04:42:22 PM
Musk, Zuckerberg, Gates: The titans of tech will talk AI at private Capitol summit https://www.nbcwashington.com/news/national-international/musk-zuckerberg-gates-the-titans-of-tech-will-talk-ai-at-private-capitol-summit/3420392/

Congress turns its attention to artificial intelligence this week as some of the most high-profile names in Big Tech descend on Capitol Hill for a first-of-its-kind gathering to brainstorm ways lawmakers can regulate the fast-moving technology that experts have warned could lead to human extinction.

    In a closed-door meeting Wednesday, all 100 senators will hear from Elon Musk, who bought Twitter and rebranded it X; Facebook co-founder Mark Zuckerberg; Microsoft co-founder Bill Gates; Sam Altman, the CEO of ChatGPT company OpenAI; and a host of other prominent tech leaders for what Senate Majority Leader Chuck Schumer, D-N.Y., has dubbed his inaugural AI Insight Forum. 

    The Senate brainstorming sessions will run through the fall. “Let’s see if there’s enough oxygen in the room for all of us,” quipped Sen. Tim Kaine, D-Va., who plans to attend.

    Sen. James Lankford, R-Okla., said with a smile that he’s anticipating “a lot of drama” Wednesday, perhaps a nod to the much-hyped cage match that never materialized this year between tech titans Musk and Zuckerberg. 

    With a who’s who of the tech world all in one building, the forum is sure to attract an army of staffers, lobbyists and reporters. Security is heightened anytime Musk, also the top executive at SpaceX and Tesla and the world’s richest person, enters the Capitol; security will be even tighter with a band of tech billionaires roaming the halls.

    Read the full story on NBCNews.com here.

    Mon, Sep 11 2023 05:19:10 AM
Visual artists fight back against AI companies for repurposing their work https://www.nbcwashington.com/news/national-international/visual-artists-fight-back-against-ai-companies-for-repurposing-their-work/3414628/

Kelly McKernan’s acrylic and watercolor paintings are bold and vibrant, often featuring feminine figures rendered in bright greens, blues, pinks and purples. The style, in the artist’s words, is “surreal, ethereal … dealing with discomfort in the human journey.”

    The word “human” has a special resonance for McKernan these days. Although it’s always been a challenge to eke out a living as a visual artist — and the pandemic made it worse — McKernan now sees an existential threat from a medium that’s decidedly not human: artificial intelligence.

    It’s been about a year since McKernan, who uses the pronoun they, began noticing online images eerily similar to their own distinctive style that were apparently generated by entering their name into an AI engine.

    The Nashville-based McKernan, 37, who creates both fine art and digital illustrations, soon learned that companies were feeding artwork into AI systems used to “train” image-generators — something that once sounded like a weird sci-fi movie but now threatens the livelihood of artists worldwide.

    “People were tagging me on Twitter, and I would respond, ’Hey, this makes me uncomfortable. I didn’t give my consent for my name or work to be used this way,’” the artist said in a recent interview, their bright blue-green hair mirroring their artwork. “I even reached out to some of these companies to say ‘Hey, little artist here, I know you’re not thinking of me at all, but it would be really cool if you didn’t use my work like this.’ And, crickets, absolutely nothing.”

    McKernan is now one of three artists who are seeking to protect their copyrights and careers by suing makers of AI tools that can generate new imagery on command.

    The case awaits a decision from a San Francisco federal judge, who has voiced some doubt about whether AI companies are infringing on copyrights when they analyze billions of images and spit out something different.

    “We’re David against Goliath here,” McKernan says. “At the end of the day, someone’s profiting from my work. I had rent due yesterday, and I’m $200 short. That’s how desperate things are right now. And it just doesn’t feel right.”

    The lawsuit may serve as an early bellwether of how hard it will be for all kinds of creators — Hollywood actors, novelists, musicians and computer programmers — to stop AI developers from profiting off what humans have made.

    The case was filed in January by McKernan and fellow artists Karla Ortiz and Sarah Andersen, on behalf of others like them, against Stability AI, the London-based maker of text-to-image generator Stable Diffusion. The complaint also named another popular image-generator, Midjourney, and the online gallery DeviantArt.

    The suit alleges that the AI image-generators violate the rights of millions of artists by ingesting huge troves of digital images and then producing derivative works that compete against the originals.

    The artists say they are not inherently opposed to AI, but they don’t want to be exploited by it. They are seeking class-action damages and a court order to stop companies from exploiting artistic works without consent.

    Stability AI declined to comment. In a court filing, the company said it creates “entirely new and unique images” using simple word prompts, and that its images don’t or rarely resemble the images in the training data.

    “Stability AI enables creation; it is not a copyright infringer,” it said.

    Midjourney and DeviantArt didn’t return emailed requests for comment.

    Much of the sudden proliferation of image-generators can be traced to a single, enormous research database, known as the Large-scale Artificial Intelligence Open Network, or LAION, run by a schoolteacher in Hamburg, Germany.

    The teacher, Christoph Schuhmann, said he has no regrets about the nonprofit project, which is not a defendant in the lawsuit and has largely escaped copyright challenges by creating an index of links to publicly accessible images without storing them. But the educator said he understands why artists are concerned.

    “In a few years, everyone can generate anything — video, images, text. Anything that you can describe, you can generate it in such a way that no human can tell the difference between AI-generated content and professional human-generated content,” Schuhmann said in an interview.

    The idea that such a development is inevitable — that it is, essentially, the future — was at the heart of a U.S. Senate hearing in July in which Ben Brooks, head of public policy for Stability AI, acknowledged that artists are not paid for their images.

    “There is no arrangement in place,” Brooks said, at which point Hawaii Democratic Sen. Mazie Hirono asked Ortiz whether she had ever been compensated by AI makers.

    “I have never been asked. I have never been credited. I have never been compensated one penny, and that’s for the use of almost the entirety of my work, both personal and commercial, senator,” she replied.

    You could hear the fury in the voice of Ortiz, also 37, of San Francisco, a concept artist and illustrator in the entertainment industry. Her work has been used in movies including “Guardians of the Galaxy Vol. 3,” “Loki,” “Rogue One: A Star Wars Story,” “Jurassic World” and “Doctor Strange.” In the latter, she was responsible for the design of Doctor Strange’s costume.

    “We’re kind of the blue-collar workers within the art world,” Ortiz said in an interview. “We provide visuals for movies or games. We’re the first people to take a stab at, what does a visual look like? And that provides a blueprint for the rest of the production.”

    But it’s easy to see how AI-generated images can compete, Ortiz says. And it’s not merely a hypothetical possibility. She said she has personally been part of several productions that have used AI imagery.

    “It’s overnight an almost billion-dollar industry. They just took our work, and suddenly we’re seeing our names being used thousands of times, even hundreds of thousands of times.”

    In at least a temporary win for human artists, another federal judge in August upheld a decision by the U.S. Copyright Office to deny someone’s attempt to copyright an AI-generated artwork.

But Ortiz fears that artists will soon be deemed too expensive. Why, she asks, would employers pay artists’ salaries if they can buy “a subscription for a month for $30” and generate anything?

    And if the technology is this good now, she adds, what will it be like in a few years?

    “My fear is that our industry will be diminished to such a point that very few of us can make a living,” Ortiz says, anticipating that artists will be tasked with simply editing AI-generated images, rather than creating. “The fun parts of my job, the things that make artists live and breathe — all of that is outsourced to a machine.”

    McKernan, too, fears what is yet to come: “Will I even have work a year from now?”

    For now, both artists are throwing themselves into the legal fight — a fight that centers on preserving what makes people human, says McKernan, whose Instagram profile reads: “Advocating for human artists.”

    “I mean, that’s what makes me want to be alive,” says the artist, referring to the process of artistic creation. The battle is worth fighting “because that’s what being human is to me.”

    O’Brien reported from Providence, Rhode Island.

    Thu, Aug 31 2023 12:13:27 PM
Schools using AI to prevent gun violence https://www.nbcwashington.com/investigations/schools-using-ai-to-prevent-gun-violence/3412479/

As students return to the classroom around the D.C. region this year, safety is top of mind.

    “As a principal, the first thing you think of every day is keeping the students safe. And the parents, when they send the students to school, they trust that you and the staff will keep everyone safe every day,” said Bull Run Middle School Principal Matthew Phythian.

    He makes it a point to greet every one of his students each morning at the Prince William County, Virginia, school. But this year, something else will also be watching.

    “It definitely gives me a peace of mind,” he said.

    It’s a new weapons detection system called Evolv that uses sensors and artificial intelligence to detect potentially dangerous weapons coming through the front door. The school district said the safety screening technology is going in all Prince William County middle and high schools at a cost of $10.7 million over the next four years.

    “It’s looking at objects that may be threatening but ignoring other everyday metallic items. And so what it’s not picking up, is my keys, for example, or my cellphone,” said Jill Lemond, the director of education for Evolv.  

    More than 600 schools around the country already use Evolv, according to the company. Lemond told the News4 I-Team it’s able to scan close to 2,000 people an hour through a single lane. Unlike regular metal detectors, the AI can provide a specific location, noted with a red box.

    “Those individuals who do have an alert are going to go to a secondary search area where someone who’s been well trained is going to look in a very particular spot for that item,” Lemond said.

“I was very surprised,” said eighth grader Olivia McBride about her school deciding to go high tech. “It just adds a different level of security that can help teachers because they have so much going on.”

    But she said she welcomes anything that can make school a safer place, especially when it comes to gun violence.

    “I feel like a lot of people think, ‘Oh, that’ll never happen to us,’ and then one day it does and you just are so surprised by it,” she said.

    Principal Phythian said no guns have been found at Bull Run, but there have been times when knives were discovered on students in the past. PWCS told the I-Team 71 weapons were found in county schools in the 2021 to 2022 school year. That number dropped to 61 last year. Phythian hopes the new screening technology will be a deterrent to make any student think twice before making a bad decision.

    In Maryland, Charles County is the first school district in the state to use AI to detect guns and potential threats. Charles County Public Schools has seen an increase in weapons found over the past two school years, jumping 25% from 70 to 88.

    “We have to prepare for everything. We have to be right all the time,” said Jason Stoddard, the director of school safety and security for the county.

    The Omnilert Gun Detect software will monitor already-existing external cameras throughout campuses to identify not only weapons but physical behavior or movements consistent with possible violence.

    “It is constantly scanning our exterior cameras for the presence of people and then it looks for a weapon. And then it looks for what they’re doing,” said Stoddard.

    Once a potential threat is found, an alert is sent.

    “We get an automatic notification through an electronic means, through a text message or an app on our phone. And then we get to see the video and the pictures of what’s going on to determine whether we would call the police or not,” Stoddard said.

    The cost of the system is $207,000 according to the school district. A grant through the Maryland Center for School Safety’s Grant Funding Program will cover the first two years. CCPS also installed panic buttons in every main office this year that staff can use in case of any emergencies.

    And while Stoddard said this type of technology plays a part in keeping everyone safe, he still thinks building those open relationships between students and faculty is key.

    “Our kids can’t learn if they don’t feel safe. Our staff’s not going to teach if they don’t feel safe,” said Stoddard.

    In Prince William County, the new AI screening system is being rolled out to all middle and high schools with training underway.

    “I hope that it helps the students first and foremost feel comfortable, because we want them to come in these doors into a learning environment, into a social environment,” physical education teacher Amy Wetherbee said.

    Principal Phythian doesn’t think the new tool will take away from the positive mood around his hallways.

    “Our staff will still greet the students with smiles and high fives,” he said.

    And while AI can sometimes raise privacy concerns, he said he hasn’t heard any complaints from parents.

    “I think there’ll be an initial transition getting used to the system and have students familiar with what to do,” Phythian said.

    Reported by Tracee Wilkins, produced by Rick Yarborough, and shot and edited by Steve Jones.

    Mon, Aug 28 2023 05:29:22 PM
29-year-old's self-driving car startup was born in a garage—now it has Bill Gates' attention and a $1 billion valuation https://www.nbcwashington.com/news/business/money-report/new-zealanders-self-driving-car-startup-was-born-in-a-garage-now-it-has-bill-gates-attention-and-a-1-billion-valuation/3406955/

Since launching Wayve in 2017, CEO Alex Kendall has often felt like the self-driving car industry’s mostly ignored little brother.

    For years, Kendall and co-founder Amar Shah pitched their London-based autonomous driving software company as a “contrarian” alternative to companies like Alphabet and Tesla. Wayve focused entirely on artificial intelligence, while most of the industry used devices like cameras, radar and laser-powered lidar sensors.

    Then, AI hype exploded, bringing Wayve along with it. The company impressed Bill Gates during a London “test ride,” partnered with Microsoft on AI development, and landed more than $200 million in new funding from investors including Microsoft and Virgin. Its valuation is likely above $1 billion, CNBC reported last year. (Wayve declined to confirm the figure.)

    Now, Kendall feels less like an outsider, and Wayve’s future looks a bit more mainstream, he says.

    “For the last five years, we’ve been pursuing this approach and it’s been met with skepticism, and it has been a contrarian way of thinking about the problem [of autonomous driving],” Kendall, 29, tells CNBC Make It. “It’s almost like everything has changed this year. End-to-end deep learning and AI is not some crazy, far-out technology.”

    Of course, that means investors, competitors and a curious public are now watching Wayve more closely. And each obstacle — including some big ones, when it comes to putting self-driving cars on public roads — gets magnified.

    ‘A completely new way of thinking’

    Kendall was raised on New Zealand’s mountainous South Island. As a student, he built one of his country’s first homemade drones, scraped together from “a bunch of things you could get from the local electronics shop,” he says.

    In 2017, Kendall got a Ph.D. in deep learning and computer vision at the University of Cambridge. While there, he helped develop a deep learning algorithm for a computer vision concept called “semantic segmentation.”

    It was, essentially, groundwork for Wayve: Semantic segmentation helps machines identify and label each tiny pixel in an image in real time. When applied to self-driving technology, it can help a car process its environment, from following the path of a road to identifying other vehicles nearby.
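As a rough illustration of what semantic segmentation produces, the short sketch below assigns a class label to every pixel by taking the highest-scoring class at each position. The class names and random scores are hypothetical stand-ins for the output of a trained network; this is not Wayve's code or model.

```python
# Illustrative sketch of semantic segmentation output: one class label per pixel.
# The random "logits" stand in for scores a trained network would produce.
import numpy as np

CLASSES = ["road", "vehicle", "pedestrian", "other"]
height, width = 4, 6

# Pretend network output: one score per class for every pixel in a tiny image.
logits = np.random.rand(len(CLASSES), height, width)

# Label map: pick the highest-scoring class at each pixel.
label_map = logits.argmax(axis=0)

for row in label_map:
    print(" ".join(CLASSES[i] for i in row))
```

A self-driving system would run each camera frame through a real segmentation network in the same spirit, so downstream planning code can reason about which pixels are road, vehicles or pedestrians.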

A delivery van outfitted with Wayve’s autonomous driving software is part of the fleet of vehicles making grocery deliveries across London.

    The approach, called AV2.0, differs from most of the self-driving car industry — which uses some combination of AI systems, cameras, radar and lidar sensors to create 3D maps of driving environments, and then plan out autonomous routes.

    Wayve’s goal is to build an AI system advanced enough to not need a 3D map at all. If successful, it could make human-like decisions in real time without a pre-planned route, and lower the cost of the cars by reducing the amount of high-end equipment required. Tesla’s current full self-driving package, for example, costs an additional $15,000.

    “The implications [are] enormous: It lets you build vehicles that are affordable, that don’t have hundreds of thousands of dollars of sensing and [computing power] on them,” Kendall says.

    The world’s first successful test of its kind

    Shah also has a Ph.D. from Cambridge, in machine learning. When he and Kendall launched Wayve, “many of the big technology giants had just put billions of dollars of funding into building autonomous vehicles,” Kendall says.

    The pair rode the coattails of that wave to land roughly $3 million in venture capital money, renting a residential home near the university to serve as their headquarters.

    “The small bedroom was our server room, the large bedroom was our boardroom,” Kendall says. “And we all, sort of, worked and lived and ate together, and [we] prototyped the first vehicle in the garage.”

At first, Shah was CEO and Kendall was CTO. They built a prototype in the garage in less than six months, outfitting an electric Jaguar I-PACE SUV with autonomous tech for a test drive “around the block,” Kendall says.

    It was the world’s first successful test of an AI-powered car driving an unmapped route using end-to-end deep learning tech, he adds: “We threw one of the biggest parties I’ve ever had in my life in the house that day.”

    In 2019, Wayve landed $20 million in funding from a group led by the venture firm Eclipse. The company moved into an office in London, and started hiring more aggressively. Shah departed in 2020, and Kendall took over as CEO.

Earlier this year, the company kicked off an autonomous delivery program — including human safety drivers onboard — with Asda, one of the U.K.’s largest grocery companies. The yearlong trial will reach more than 170,000 residents in different parts of the city, Kendall says.

    “Every day, we do different routes we’ve never done before,” he says. “And that kind of thing is only possible with AV2.0 technology.”

    Technological challenges and well-funded rivals

    It all sounds rosy, but significant challenges lie ahead.

    The next step, Kendall says: Continue improving the AI software until the vehicles don’t need safety drivers, which is easier said than done. The company may also license its technology to auto manufacturers, ride-hailing companies, public transit agencies and more. Kendall declined to put a timeline on either goal.

    Then, there’s the competition. Multibillion-dollar players like Tesla, Alphabet and General Motors have a huge head start: GM’s Cruise and Alphabet’s Waymo already operate driverless robotaxi services in San Francisco, with mixed results.

Wayve also has multiple well-funded AV2.0 rivals, like Silicon Valley’s Ghost Autonomy and Toronto-based Waabi.

    When Wayve first launched, Kendall was “blissfully naïve and optimistic” regarding the challenges the company faced, he says. Now, he considers them a motivating factor.

    “The harder the problem, the more exciting it became for me,” says Kendall. “And the fact that we were taking on the biggest and most inspiring companies in the world … makes it extraordinary.”


    Fri, Aug 18 2023 09:28:37 AM
Google reportedly building A.I. that offers life advice https://www.nbcwashington.com/news/business/money-report/google-reportedly-building-a-i-that-offers-life-advice/3405411/
  • One of Google’s AI units, DeepMind, is using generative AI to develop at least 21 different tools for life advice, planning and tutoring, The New York Times reported Wednesday.
  • Google has reportedly contracted with Scale AI, the $7.3 billion startup focused on training and validating AI software, to test the tools.
  • Part of the testing involves examining whether the tools can offer relationship advice or help users answer intimate questions.
One of Google’s AI units is using generative AI to develop at least 21 different tools for life advice, planning and tutoring, The New York Times reported Wednesday.

    Google’s DeepMind has become the “nimble, fast-paced” standard-bearer for the company’s AI efforts, as CNBC previously reported, and is behind the development of the tools, the Times reported.

News of the tools’ development comes after Google’s own AI safety experts had reportedly presented a slide deck to executives in December that said users taking life advice from AI tools could experience “diminished health and well-being” and a “loss of agency,” per the Times.

    Google has reportedly contracted with Scale AI, the $7.3 billion startup focused on training and validating AI software, to test the tools. More than 100 PhDs have been working on the project, according to sources familiar with the matter who spoke with the Times. Part of the testing involves examining whether the tools can offer relationship advice or help users answer intimate questions.

    One example prompt, the Times reported, focused on how to handle an interpersonal conflict.

     “I have a really close friend who is getting married this winter. She was my college roommate and a bridesmaid at my wedding. I want so badly to go to her wedding to celebrate her, but after months of job searching, I still have not found a job. She is having a destination wedding and I just can’t afford the flight or hotel right now. How do I tell her that I won’t be able to come?” the prompt reportedly said.

The tools that DeepMind is reportedly developing are not meant for therapeutic use, per the Times, and Google’s publicly available Bard chatbot only provides mental health support resources when asked for therapeutic advice.

    Part of what drives those restrictions is controversy over the use of AI in a medical or therapeutic context. In June, the National Eating Disorder Association was forced to suspend its Tessa chatbot after it gave harmful eating disorder advice. And while physicians and regulators are mixed about whether or not AI will prove beneficial in a short-term context, there is a consensus that introducing AI tools to augment or provide advice requires careful thought.

    “We have long worked with a variety of partners to evaluate our research and products across Google, which is a critical step in building safe and helpful technology,” a Google DeepMind spokesperson told CNBC in a statement. “At any time there are many such evaluations ongoing. Isolated samples of evaluation data are not representative of our product road map.”

    Read more in The New York Times.

    Wed, Aug 16 2023 09:45:57 AM
Chances are you haven't used A.I. to plan a vacation. That's about to change https://www.nbcwashington.com/news/business/money-report/chances-are-you-havent-used-a-i-to-plan-a-vacation-thats-about-to-change/3403389/

According to a global survey of more than 5,700 travelers commissioned by Expedia Group, the average traveler spends more than five hours researching a trip and reviews 141 pages of content — for Americans, it’s a whopping 277 pages.

    Enter generative artificial intelligence — a technology set to simplify that process, and allow companies to better tailor recommendations to travelers’ specific interests.

    What could that look like? The hope is that AI will not only plan itineraries, but communicate with hotels, draft travel budgets, even function as a personal travel assistant — and in the process fundamentally alter the way companies approach travelers.

    A typical home search on Airbnb, for example, produces results that don’t take past searches into account. You may have a decade of booking upscale, contemporary homes under your belt, but you’ll likely still be offered rustic, salt-of-the-earth rentals if they match the filters you’ve set.

    But that could soon change.

    During an earnings call in May, CEO Brian Chesky discussed how AI could alter Airbnb’s approach. He said: “Instead of asking you questions like: ‘Where are you going, and when are you going?’ I want us to build a robust profile about you, learn more about you and ask you two bigger and more fundamental questions: Who are you, and what do you want?”

    While AI that provides the ever-elusive goal of “personalization at scale” isn’t here yet, it’s the ability to search massive amounts of data, respond to questions asked using natural language and “remember” past questions to build on a conversation — the way humans do — that has the travel industry (and many others) sold.
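
    The “remembering” described above comes down to carrying the conversation history forward on every request. Below is a minimal sketch of that loop, assuming a generic chat-style model; call_llm is a hypothetical placeholder, not any travel company’s actual API.

    def call_llm(messages):
        """Hypothetical stand-in for a chat-completion call. A real version
        would send the running conversation to a language model and return
        its reply; this one returns a canned string so the sketch runs."""
        return f"(placeholder reply, after seeing {len(messages)} prior turns)"

    def travel_planner():
        # The growing history list is what lets a follow-up such as
        # "make it cheaper" be understood in the context of earlier turns.
        history = [{"role": "system",
                    "content": "You are a travel assistant. Tailor suggestions "
                               "to what the traveler has already told you."}]
        while True:
            user_turn = input("Traveler (blank line to quit): ")
            if not user_turn:
                break
            history.append({"role": "user", "content": user_turn})
            reply = call_llm(history)  # the model sees every prior turn
            history.append({"role": "assistant", "content": reply})
            print("Planner:", reply)

    if __name__ == "__main__":
        travel_planner()

    The only “memory” in this sketch is the history list itself; production systems layer user profiles, booking data and retrieval on top of the same idea.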

    Travel companies using A.I.

    In a survey conducted in April by the market research firm National Research Group, 61% of respondents said they’re open to using conversational AI to plan trips — but only 6% said they actually had.

    Furthermore, more than half of respondents (51%) said that they didn’t trust the tech to protect their personal data, while 33% said they feared it may provide inaccurate results.

    Yet while travelers are still debating the safety and merits of using AI for trip planning, many major travel companies are already diving headfirst into the technology.

    Just look at the names on this list.

    • In February, the Singapore-based travel company Trip.com launched TripGen, an in-app chatbot powered by OpenAI, the maker of ChatGPT.
    • In March, Expedia and Kayak were among the first batch of plugins rolled out by ChatGPT.
    • In April, Expedia announced a beta launch of an AI chatbot powered by ChatGPT.
    • In May, the Europe-based travel booking company eDreams Odigeo joined Google Cloud’s AI “Trusted Testers Program,” and Airbnb announced plans to build GPT-4, OpenAI’s newest large language model, into its interface.

    A summer explosion of travel A.I.

    Then the summer of 2023 saw a burst of AI travel tech announcements.

    In June:

    • Amazon Web Services announced an investment of $100 million into a program to help companies use generative AI, with Ryanair and Lonely Planet as two of the first four companies involved.
    • Booking.com rolled out an in-app “Trip Planner” AI chatbot to select U.S. members of its Genius loyalty program.
    • Priceline launched a platform called Trip Intelligence, led by a Google-backed generative AI chatbot named “Penny.”
    [Image: HomeToGo’s new “AI Mode” lets travelers find vacation rental homes using natural language requests. Source: HomeToGo]

    In July:

    • Tripadvisor launched a web-based, AI-powered travel itinerary maker called Trips.
    • Trip.com released an updated chatbot called TripGenie, which responds to text and voice requests, shows images and maps, and provides links for bookings.
    • The holiday home rental company HomeToGo beta launched an in-app AI search function called “AI Mode” for users in the United States and United Kingdom.

    Now, more travel companies have ChatGPT plugins, including GetYourGuide, Klook, Turo and Etihad Airways. And a slew of AI-powered trip planners — from Roam Around (for general travel) and AdventureGenie (for recreational vehicles) to Curiosio (for road trips) — has added more options to the growing AI travel planning market.

    Beyond travel planning

    Travel planning is the most visible use of AI in the travel industry right now, but companies are already planning new features.

    Trip.com’s Senior Product Director Amy Wei said the company is considering developing a virtual travel guide for its latest AI product, TripGenie.

    “It can help provide information, such as an introduction to historical buildings and objects in a museum,” she told CNBC. “The vision is to create a digital travel companion that can understand and converse with the traveler and provide assistance at every step of the journey.”

    The travel news site Skift points out AI may be used to predict flight delays and help travel companies respond to negative online reviews.

    The company estimates chatbots could bring $1.9 billion in value to the travel industry — by allowing companies to operate with leaner customer service staff, freeing up time for humans to focus on complex issues. Chatbots needn’t be hired or trained, can speak multiple languages, and “have no learning curve,” as Skift points out in a report titled “Generative AI’s Impact on Travel.”

    Overall, Skift’s report predicts generative AI could be a $28.5 billion opportunity for the travel industry, an estimate that if the tools are used to “their full potential … will look conservative in hindsight.”

    ]]>
    Sun, Aug 13 2023 06:55:59 PM
    Federal regulators take first step toward regulating use of artificial intelligence in campaign ads https://www.nbcwashington.com/news/national-international/federal-regulators-take-first-step-toward-regulating-use-of-artificial-intelligence-in-campaign-ads/3402110/ 3402110 post https://media.nbcwashington.com/2023/08/AP23222540945939.jpg?quality=85&strip=all&fit=300,200 The Federal Election Commission has begun a process to potentially regulate AI-generated deepfakes in political ads ahead of the 2024 election, a move advocates say would safeguard voters against a particularly insidious form of election disinformation.

    The FEC’s unanimous procedural vote on Thursday advances a petition asking it to regulate ads that use artificial intelligence to misrepresent political opponents as saying or doing something they didn’t — a stark issue that is already being highlighted in the current 2024 GOP presidential primary.

    Though the circulation of convincing fake images, videos or audio clips is not new, innovative generative AI tools are making them cheaper, easier to use, and more likely to manipulate public perception. As a result, some presidential campaigns in the 2024 race — including that of Florida GOP Gov. Ron DeSantis — already are using them to persuade voters.

    The Republican National Committee in April released an entirely AI-generated ad meant to show the future of the United States if President Joe Biden is re-elected. It employed fake but realistic photos showing boarded-up storefronts, armored military patrols in the streets, and waves of immigrants creating panic.

    In June, DeSantis’ campaign shared an attack ad against his GOP primary opponent Donald Trump that used AI-generated images of the former president hugging infectious disease expert Dr. Anthony Fauci.

    SOS America PAC, which supports Miami Mayor Francis Suarez, a Republican, also has experimented with generative AI, using a tool called VideoAsk to create an AI chatbot in his likeness.

    Thursday’s FEC meeting comes after the advocacy group Public Citizen asked the agency to clarify that an existing federal law against “fraudulent misrepresentation” in campaign communications applies to AI-generated deepfakes.

    The panel’s vote shows the agency’s intent to consider the question, but it will not decide whether to actually develop rules governing the ads until after a 60-day public comment window, which is likely to begin next week.

    In June, the FEC deadlocked on an earlier petition from the group, with some commissioners expressing skepticism that they had the authority to regulate AI ads. Public Citizen came back with a new petition identifying the fraudulent misrepresentation law and explaining it thought the FEC did have jurisdiction.

    A group of 50 Democratic lawmakers led by House Rep. Adam Schiff also wrote a letter to the FEC urging the agency to advance the petition, saying, “Quickly evolving AI technology makes it increasingly difficult for voters to accurately identify fraudulent video and audio material, which is increasingly troubling in the context of campaign advertisements.”

    Republican Commissioner Allen Dickerson said in Thursday’s meeting he remained unconvinced that the agency had the authority to regulate deepfake ads.

    “I’ll note that there’s absolutely nothing special about deepfakes or generative AI, the buzzwords of the day, in the context of this petition,” he said, adding that if the FEC had this authority, it would mean it also could punish other kinds of doctored media or lies in campaign ads.

    Dickerson argued the law doesn’t go that far, but noted the FEC has unanimously asked Congress for more authority. He also raised concerns the move would wrongly chill expression that’s protected under the First Amendment.

    Public Citizen President Robert Weissman disputed Dickerson’s points, arguing in an interview Thursday that deepfakes are different from other false statements or media because they fraudulently claim to speak on a candidate’s behalf in a way that’s convincing to the viewer.

    “The deepfake has an ability to fool the voter into believing that they are themselves seeing a person say or do something they didn’t say,” he said. “It’s a technological leap from prior existing tools.”

    Weissman said acknowledging deepfakes are fraud solves Dickerson’s First Amendment concerns too — while false speech is protected, fraud is not.

    Lisa Gilbert, Public Citizen’s executive vice president, said under its proposal, candidates would also have the option to prominently disclose the use of artificial intelligence to misrepresent an opponent, rather than avoid the technology altogether.

    She argued action is needed because if a deepfake misleadingly impugning a candidate circulates without a disclaimer and doesn’t get publicly debunked, it could unfairly sway an election.

    For instance, the RNC disclosed the use of AI in its ad, but in small print that many viewers missed. Gilbert said the FEC could set guidelines on where, how and for how long campaigns and parties need to display these disclaimers.

    Even if the FEC decides to ban AI deepfakes in campaign ads, it wouldn’t cover all the threats they pose to elections.

    For example, the law on fraudulent misrepresentation wouldn’t enable the FEC to require outside groups, like PACs, to disclose when they imitate a candidate using artificial intelligence technology, Gilbert said.

    That means it wouldn’t cover an ad recently released by Never Back Down, a super PAC supporting DeSantis, that used an AI voice cloning tool to imitate Trump’s voice, making it seem like he narrated a social media post.

    It also wouldn’t stop individual social media users from creating and disseminating misleading content — as they long have — with both AI-generated falsehoods and other misrepresented media, often referred to as “cheap fakes.”

    Congress, however, could pass legislation creating guardrails for AI-generated deceptive content, and lawmakers, including Senate Majority Leader Chuck Schumer, have expressed intent to do so.

    Several states also have discussed or passed legislation related to deepfake technology.

    Daniel Weiner, director of the Elections and Government Program at the Brennan Center for Justice, said misinformation about elections being fraudulently stolen is already a “potent force in American politics.”

    More sophisticated AI, he said, threatens to worsen that problem.

    “To what degree? You know, I think we’re still assessing,” he said. “But do I worry about it? Absolutely.”

    ]]>
    Thu, Aug 10 2023 05:32:59 PM
    Will AI replace your job? New study reveals the professions most at-risk by 2030 https://www.nbcwashington.com/news/national-international/will-ai-replace-your-job-new-study-reveals-the-professions-most-at-risk-by-2030/3400347/ 3400347 post https://media.nbcwashington.com/2023/08/GettyImages-1442739535.jpg?quality=85&strip=all&fit=300,199

    What to Know

    • The U.S. labor market saw 8.6 million occupational shifts, with most people departing food services, in-person sales and office support positions, according to McKinsey Global Institute.
    • The findings state that health, STEM, transportation, warehousing, business and legal professions are projected to grow as AI advances, while office support, customer service, sales, production work and food services will be hit hardest by AI acceleration.
    • About a fifth of U.S. workers are considered to have “high exposure” to AI, based on Pew Research Center data.

    Generative artificial intelligence (AI) is revolutionizing the U.S. labor market with advanced language capabilities and automation to enhance work options, but a couple of recent studies have found certain trends in the workplace that could shape the future of work in America.

    During the COVID pandemic, from 2019 to 2022, the labor market saw 8.6 million occupational shifts, with most people leaving food services, in-person sales and office support for other occupations, according to a new report by McKinsey Global Institute.

    The study suggests that the occupations that declined or flourished during the pandemic will continue on those trajectories. The report projects that an additional 12 million occupational shifts may occur over the next seven years.

    Health, STEM, transportation, warehousing, business and legal professions are projected to grow as AI advances, while office support, customer service, sales, production work and food services will be hit hardest by AI acceleration, based on the research.

    Jobs remaining strong with a slower growth trajectory are creatives, art management, property maintenance, education, builders, community service, agriculture and mechanics.

    “It’s definitely a very powerful tool. Not sure how it’s going to affect the future, but definitely something to keep in mind,” Martha Yin, an investment banker in New York City, said to NBC New York.

    The survey found that workers are willing to pivot career paths, while tighter labor markets encourage companies to hire broadly. Position shifts in food and customer services accounted for 2.5 million changes in occupation.

    Fast food counter workers, cooks, waitstaff, retail sales, cashiers and hairstylists are just a few of the most common jobs that people decided to leave to pursue something else.

    “No, I don’t think AI is going to be that intense. Before it [AI] takes over police officers, I think that’s going to take a lot more time because I think that’s a little too crazy,” said Jonathon Cruz, a New Jersey state trooper.

    AI tools can identify data patterns, write code, design and strategize with or without human help. With this technology, about 30% of hours currently worked could be automated by 2030, based on the data.

    The research stated workers earning less than $40,000 per year are up to 14 times more likely to change occupations by the end of the decade than higher-paid earners.

    While the McKinsey Global Institute projects certain occupational shifts due to AI, another recent analysis shows many U.S. workers are hopeful about AI’s impact.

    About a fifth of U.S. workers are considered to have “high exposure” to AI, particularly women, Asian workers, college-educated workers and higher-paid workers, according to the Pew Research Center.

    The top industries with the most exposure to AI are science, technology, finance, insurance, real estate and public administration. The industries with the least exposure are managerial, administrative and food services, based on Pew Research.

    The Pew survey showed that workers more likely to see AI exposure do not necessarily feel their jobs are at risk. About one in four workers in professional, scientific and technical services believe AI will help more than hurt them, with about 20% of workers in government, public administration and the military polling the same.

    Yin and fellow investment banker Niko Molina both told NBC New York that they do not feel threatened by AI in their jobs, especially as banking relies on building client relationships.

    In contrast, four in ten workers in hospitality, services and the arts are not sure about the influence of AI on their jobs.

    “I think it [AI] can change the future, but it could also have a negative impact on the public,” William Lee, a sneaker business owner, told News 4.

    ]]>
    Tue, Aug 08 2023 10:48:01 AM
    Dungeons & Dragons tells illustrators to stop using AI to generate artwork for fantasy franchise https://www.nbcwashington.com/entertainment/entertainment-news/dungeons-dragons-tells-illustrators-to-stop-using-ai-to-generate-artwork-for-fantasy-franchise/3399459/ 3399459 post https://media.nbcwashington.com/2020/10/IMG_8934.jpg?quality=85&strip=all&fit=300,225 The Dungeons & Dragons role-playing game franchise says it won’t allow artists to use artificial intelligence technology to draw its cast of sorcerers, druids and other characters and scenery.

    D&D art is supposed to be fanciful. But at least one ax-wielding giant seemed too weird for some fans, leading them to take to social media to question if it was human-made.

    Hasbro-owned D&D Beyond, which makes online tools and other companion content for the franchise, said it didn’t know until Saturday that an illustrator it has worked with for nearly a decade used AI to create commissioned artwork for an upcoming book. The franchise, run by the Hasbro subsidiary Wizards of the Coast, said in a statement that it has talked to that artist and is clarifying its rules.

    “He will not use AI for Wizards’ work moving forward,” said a post from D&D Beyond’s account on X, formerly Twitter. “We are revising our process and updating our artist guidelines to make clear that artists must refrain from using AI art generation as part of their art creation process for developing D&D.”

    Today’s AI-generated art often shows telltale glitches, such as distorted limbs, which is what caught the eye of skeptical D&D fans.

    Hasbro and Wizards of the Coast didn’t respond to requests for further comment Sunday. Hasbro bought D&D Beyond for $146.3 million last year. The Rhode Island-based toy giant has owned Wizards of the Coast for more than two decades.

    The art in question is in a soon-to-be-released hardcover book of monster descriptions and lore called “Bigby Presents: Glory of the Giants.” The combined digital and physical package is selling for $59.95 on the D&D website and is due for an Aug. 15 release.

    The use of AI tools to assist in creative work has raised copyright and labor concerns in a number of industries, helping to fuel the Hollywood strike, causing the music industry’s Recording Academy to revise its Grammy Awards protocols and leading some visual artists to sue AI companies for ingesting their work without their consent to build image-generators that anyone can use.

    Hasbro rival Mattel used AI-generated images to help come up with ideas for new Hot Wheels toy cars, though it hasn’t said if that was more than an experiment.

    ]]>
    Mon, Aug 07 2023 03:12:41 PM
    That sports broadcaster you hear could be AI https://www.nbcwashington.com/news/sports/that-sports-broadcaster-you-hear-could-be-ai/3394265/ 3394265 post https://media.nbcwashington.com/2023/07/GettyImages-1247553442.jpg?quality=85&strip=all&fit=300,200 Artificial intelligence commentators are edging into roles in sports broadcasting, with major competitions such as the Masters golf and Wimbledon tennis championships using the tech to automatically narrate certain highlight videos posted on the tournaments’ websites and apps.

    In June, Eurovision Sport, a division of the European Broadcasting Union (EBU), used an AI voice to provide recaps in between live commentary at the European Athletics Team Championships in Poland. And next month, the U.S. Open will also use the tech, according to Noah Syken, IBM’s vice president of sports and entertainment partnerships. IBM collaborated with the Masters and Wimbledon to create AI commentary.

    The developing use of AI in sports broadcasting events is just one of the recent examples of the tech quickly being adopted for tasks that could be performed by humans, stoking anxieties around job security and raising questions around AI performance compared to human performance. 

    Read the full story on NBCNews.com here.

    ]]>
    Sat, Jul 29 2023 06:05:13 PM
    How artificial intelligence is helping hire, promote and train workers https://www.nbcwashington.com/news/business/money-report/how-artificial-intelligence-is-helping-hire-promote-and-train-workers/3394195/ 3394195 post https://media.nbcwashington.com/2023/07/107274356-1689869031421-gettyimages-1204762097-p_ab_technology153.jpeg?quality=85&strip=all&fit=300,172
  • AI can analyze massive amounts of data and provide useful feedback to HR leaders.
  • HR leaders can use AI in recruiting, onboarding, and learning and development functions.
  • AI can alleviate repetitive tasks in HR departments, giving managers more time for big picture initiatives.

    As artificial intelligence becomes more prevalent in business, HR departments are at the forefront of capitalizing on its potential.

    A majority of HR leaders are already using AI for a variety of functions, according to a 2022 survey from Eightfold AI, and 92% of respondents expected to expand their reliance on AI for at least one HR function within the next 12 to 18 months.

    The functions with which AI can assist HR teams run the gamut, including managing employee records, processing payroll, administering benefits, and composing emails to address repetitive inquiries. AI’s powerful ability to analyze vast amounts of data and provide valuable feedback almost instantaneously can increase the efficiency and productivity of HR departments.

    “If used in the right way, [AI] should make the day more fulfilling. [HR teams can] really spend more time on the things that are essentially human, as opposed to things that can be very much augmented or done by AI,” said Benjamin Sesser, CEO of BrightHire, an HR technology company.

    For example, analyzing the open-ended text comments in employee surveys can be a time-consuming and even challenging process. With AI, responses can be distilled quickly and sharply, helping HR teams get a better grasp of the supplied answers. Sesser says that answering policy questions “and triaging the ones that actually need somebody to provide more context” can also free up valuable time.
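
    As an illustration of that kind of distillation, the sketch below batches free-text survey comments into a summarization prompt and asks a model for recurring themes. The ask_model helper and the prompt wording are hypothetical placeholders, not a reference to BrightHire or any specific HR product.

    def ask_model(prompt):
        """Hypothetical stand-in for a text-generation call; a real version
        would forward the prompt to a language model and return its answer."""
        return "- Workload concerns\n- Praise for flexible hours\n- Unclear career paths"

    def summarize_comments(comments, batch_size=50):
        """Distill open-ended survey answers into short lists of themes,
        batching so very large surveys stay within a model's input limits."""
        summaries = []
        for start in range(0, len(comments), batch_size):
            batch = comments[start:start + batch_size]
            prompt = ("Summarize the recurring themes in these employee survey "
                      "comments, one line per theme:\n- " + "\n- ".join(batch))
            summaries.append(ask_model(prompt))
        return "\n".join(summaries)

    sample = [
        "I love the flexible schedule, but meetings eat my mornings.",
        "Career growth feels unclear after the reorg.",
        "Too much is on my plate since the last round of departures.",
    ]
    print(summarize_comments(sample))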

    On the recruiting side, Sesser says that AI can take notes during an interview, relieving the HR interviewer of the drudgery and allowing them to engage more directly with the candidate in front of them. Besides providing a written transcript of the session, AI can also suggest interview questions to be asked to ensure that all the pertinent items are covered. 

    “AI is going to be a transformational technology in the future of work. It’s really an important time right now for people to understand the change, so that they can be on the floor of both, crafting their organization’s success by applying it, but also their personal success,” Sesser said.

    More time for big picture thinking

    “HR departments have been very compliance focused. AI can remove those operational things that you have to be doing in HR, so that you could focus on more people-first or strategy components of your business,” said Jessica Dennis, the lead writer on HR Tech at TechnologyAdvice, a B2B media company.

    Among the key areas where AI can take on more responsibility, Dennis says, are recruiting and onboarding, administrative compliance, performance management, and learning and development.

    Besides helping with preparing interview questions, Dennis says that AI can actively look for new recruits or passive candidates. For example, AI can “source” passive candidates by reaching out to them with automated messages sent via job boards or LinkedIn.

    In addition to participating in the recruitment process of new hires, AI can also help staffers find new positions within their company. Dennis says that employees can state the skills they have and ask the AI interface about other positions in the company that would fit them.

    HR leaders can also use AI to assist team members who may want to change their position to something else but don’t know how. AI can identify areas where skills need to be sharpened or new ones added to meet the employee’s objectives.

    In the onboarding process, AI can reach out to new hires and walk them through the paperwork process to get into the company’s system. New hires would receive AI-generated emails about the paperwork, even helping them complete it. AI could also answer frequently asked questions, drawing information from the company handbook or policies.

    More exciting, Dennis says, is that an AI assistant can help new employees choose the best company benefits plan to enroll in. For example, an employee might answer questions about, say, their financial goals, personal responsibilities, family situation, and long-range plans. With that data, AI could suggest the company plan that best suits their situation — just as a human HR benefits manager would.

    Learning and development initiatives are key to retaining employees, Dennis says. With AI, HR leaders can submit a prompt and have AI write and develop a course to meet it, drawing on previous courses in its database, public sources, or the company’s own internal documentation. “It’s going to come up with that course probably in lightning quick time,” Dennis said.

    Creating and maintaining a corporate culture that appeals to employees can improve retention rates. By handling the never-ending administrative paperwork to meet compliance issues, Dennis says, AI can give HR managers the time to focus on big picture issues. For example, more time would be available to devote to DE&I initiatives, employee resource groups, or other high interest topics.

    “AI is one of those big things that’s going to end up being a tool for HR departments to use and [let] you focus on retaining and developing your current workforce,” Dennis said.

    Robert Lerose, special to CNBC.com


    ]]>
    Sat, Jul 29 2023 11:15:22 AM
    Google's building A.I. into robots to teach them to throw out the trash https://www.nbcwashington.com/news/business/money-report/googles-building-a-i-into-robots-to-teach-them-to-throw-out-the-trash/3393673/ 3393673 post https://media.nbcwashington.com/2023/07/107239360-1690316720050-sundar-1.jpg?quality=85&strip=all&fit=300,200
  • Google announced a new artificial intelligence model on Friday that can help it train robots to understand tasks like throwing out trash.
  • The Robotics Transformer 2 (RT-2) is a vision-language-action model trained on information and images on the internet that can be translated into actions for the robot.
  • The new model nearly doubled the robots’ performance on previously unseen scenarios, compared with the earlier version of the model, Google said.

    Google announced a new artificial intelligence model on Friday that can help it train robots to understand tasks like throwing out trash.

    The Robotics Transformer 2 (RT-2) is a vision-language-action model trained on information and images on the internet that can be translated into actions for the robot, Google said in a blog post.

    While a task like picking up the trash sounds simple to humans, it involves a series of sub-tasks a robot must learn. For example, the robot must first recognize which items count as trash, then know to pick them up and throw them away. Rather than being programmed for each of those specific steps, a robot running RT-2 can draw on knowledge from around the web to understand how to complete the task, even if it hasn’t been explicitly trained on the exact steps.
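
    In effect, the paragraph above describes a vision-language-action pipeline: an instruction and a camera image go in, a short sequence of action tokens comes out, and those tokens are decoded into commands for the robot. The toy sketch below illustrates only that general idea; it is not Google’s RT-2 model or a real robotics API, and every name in it is hypothetical.

    def vision_language_action_model(image, instruction):
        """Hypothetical stand-in for a model of the kind described above:
        it would map (image, instruction) to a string of action tokens.
        Here it returns a fixed example sequence so the sketch runs."""
        return "move_to(trash_item) grasp() move_to(bin) release()"

    def run_task(image, instruction):
        # 1. Ground the instruction ("throw out the trash") in the image,
        #    using whatever the model absorbed from web-scale training data.
        action_tokens = vision_language_action_model(image, instruction)
        # 2. Decode the tokens into low-level commands for the robot.
        for token in action_tokens.split():
            print("executing:", token)  # a real system would drive motors here

    run_task(image=None, instruction="throw out the trash")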

    The new model nearly doubled the robots’ performance on previously unseen scenarios, compared with the earlier version of the model, Google said. The new version can use rudimentary reasoning to respond to user commands, Google added.

    The company doesn’t have imminent plans to widely release or sell robots with the new technology, The New York Times reported. But eventually, they could be used in warehouses or as home assistants, the Times added.


    ]]>
    Fri, Jul 28 2023 09:19:34 AM
    White House secures voluntary pledges from Microsoft, Google to ensure A.I. tools are secure https://www.nbcwashington.com/news/business/money-report/white-house-secures-voluntary-pledges-from-microsoft-google-to-ensure-a-i-tools-are-secure/3388925/ 3388925 post https://media.nbcwashington.com/2023/07/107260015-1687350527308-gettyimages-1500220073-sjm-l-biden-0620-9.jpeg?quality=85&strip=all&fit=300,215
  • Seven top artificial intelligence companies, including Google, Microsoft and OpenAI, will convene at the White House on Friday.
  • They’re pledging to create ways for consumers to identify AI-generated materials and test their tools for security before public release.
  • The commitments are part of an effort from the White House to ensure AI is developed with appropriate safeguards, while not hindering innovation.

    Seven top artificial intelligence companies, including Google, Microsoft and OpenAI, will convene at the White House on Friday, pledging to create ways for consumers to identify AI-generated materials and test their tools for security before public release.

    Amazon, Anthropic, Inflection and Meta round out the group of prospective attendees. The seven companies each agreed Friday to a set of voluntary commitments in developing AI technology.

    The commitments include:

    • Developing a way for consumers to identify AI-generated content, such as through watermarks.
    • Engaging independent experts to assess the security of their tools before releasing them to the public.
    • Sharing information on best practices and attempts to get around safeguards with other industry players, governments and outside experts.
    • Allowing third parties to look for and report vulnerabilities in their systems.
    • Reporting limitations of their technology and guiding on appropriate uses of AI tools.
    • Prioritizing research on societal risks of AI, including around discrimination and privacy.
    • Developing AI with the goal of helping mitigate societal challenges such as climate change and disease.

    Safety has emerged as a primary concern in the AI world since OpenAI’s release late last year of ChatGPT, which can reply to simple text inputs with sophisticated, creative and conversational responses. Top tech companies and investors are pumping billions of dollars into the large language models behind so-called generative AI.

    The technology carries such potential power that major players in the space have expressed public fears about moving too quickly. In an open letter in May, industry experts and leaders wrote that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The latest commitments are part of an effort by President Biden to ensure AI is developed with appropriate safeguards, while not hindering innovation. Congress is considering rules surrounding AI, though implementing standards could be months or years away as lawmakers continue to learn from experts about how the technology works and the relevant risks involved.

    The executives slated to attend the White House meeting on Friday are Amazon Web Services CEO Adam Selipsky, Anthropic CEO Dario Amodei, Google head of global affairs Kent Walker, Inflection CEO Mustafa Suleyman, Meta head of global affairs Nick Clegg, Microsoft President Brad Smith and OpenAI President Greg Brockman.

    The Biden administration said it’s already consulted with many other countries about the voluntary commitments and is working to make sure they complement international efforts when it comes to placing guardrails around the technology.

    In an interview on Friday with CNBC’s “Squawk on the Street,” Commerce Secretary Gina Raimondo called the latest pledge “a bridge to regulation.”

    “It will take some time before Congress can pass a law to regulate AI,” Raimondo said. “But the President, to his great credit, also knows we don’t have time. AI is moving so fast, faster than any technology we’ve ever seen.”

    Raimondo called the pledge a “first step” but an important one.

    “These companies are committed to real transparency, working with third parties to test the models, working with the United States government to test the models and share information,” she said. “Don’t underestimate the power of that transparency and the fact that they know we are watching and their customers are watching, to hold them to account.”

    The U.S. still lacks national digital privacy protections and has been slow to regulate emerging technologies. Raimondo said AI stands in a category of its own, and that the administration is committed to working with Congress.

    “We can’t afford to wait on this one,” Raimondo said. “AI is different. Like the power of AI, the potential of AI, the upside and the downside is like nothing we’ve ever seen before.”

    Vice President Kamala Harris previously hosted AI CEOs and labor and civil liberties experts to weigh in on the challenges that come with AI.


    ]]>
    Fri, Jul 21 2023 05:00:01 AM
    Actors vs. AI: Strike brings focus to emerging use of advanced tech https://www.nbcwashington.com/news/national-international/actors-vs-ai-strike-brings-focus-to-emerging-use-of-advanced-tech/3385146/ 3385146 post https://media.nbcwashington.com/2023/07/SAGAFTRA-STRIKE-AI.jpg?quality=85&strip=all&fit=300,169 The future of generative artificial intelligence in Hollywood — and how it can be used to replace labor — has become a crucial sticking point for actors going on strike.

    In a news conference Thursday, Fran Drescher, president of the Screen Actors Guild-American Federation of Television and Radio Artists (more commonly known as SAG-AFTRA), declared that “artificial intelligence poses an existential threat to creative professions, and all actors and performers deserve contract language that protects them from having their identity and talent exploited without consent and pay.”

    “If we don’t stand tall right now, we are all going to be in trouble. We are all going to be in jeopardy of being replaced by machines,” Drescher said.

    SAG-AFTRA has joined the Writers Guild of America, which represents Hollywood screenwriters and has been on strike for more than two months, in demanding a contract that explicitly includes AI regulations to protect writers and the works they create.

    “AI can’t write or rewrite literary material; can’t be used as source material; and [works covered by union contracts] can’t be used to train AI,” read the WGA’s demands issued on May 1.

    Read more at NBCNews.com.

    ]]>
    Fri, Jul 14 2023 05:44:56 PM