Today — 5 May 2024

Microsoft Details How It's Developing AI Responsibly

5 May 2024 at 03:34
On Thursday, the Verge reported that a new report from Microsoft "outlines the steps the company took to release responsible AI platforms last year." Microsoft says in the report that it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams making generative AI applications to measure and map risks throughout the development cycle. The company notes that it added Content Credentials to its image generation platforms, which put a watermark on a photo, tagging it as made by an AI model.

The company says it's given Azure AI customers access to tools that detect problematic content like hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. This includes new jailbreak detection methods, which were expanded in March of this year to include indirect prompt injections, where the malicious instructions are part of data ingested by the AI model. It's also expanding its red-teaming efforts, including both in-house red teams that deliberately try to bypass safety features in its AI models and red-teaming applications that allow third-party testing before new models are released.

Microsoft's chief Responsible AI officer told the Washington Post this week that "We work with our engineering teams from the earliest stages of conceiving of new features that they are building."

"The first step in our processes is to do an impact assessment, where we're asking the team to think deeply about the benefits and the potential harms of the system. And that sets them on a course to appropriately measure and manage those risks downstream. And the process by which we review the systems has checkpoints along the way as the teams are moving through different stages of their release cycles...

"When we do have situations where people work around our guardrails, we've already built the systems in a way that we can understand that that is happening and respond to that very quickly. So taking those learnings from a system like Bing Image Creator and building them into our overall approach is core to the governance systems that we're focused on in this report."

The officer also said "it would be very constructive to make sure that there were clear rules about the disclosure of when content is synthetically generated," and that "there's an urgent need for privacy legislation as a foundational element of AI regulatory infrastructure."
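The Azure AI safety tooling described above is sold to customers as services like Azure AI Content Safety. As a rough illustration of what "detecting problematic content" looks like from the customer side, here is a minimal sketch assuming the azure-ai-contentsafety Python package (1.0 release); the endpoint and key variables are placeholders, and the article itself doesn't name this SDK:

```python
# Minimal sketch: score a piece of text for hate, sexual, self-harm, and
# violence categories with Azure AI Content Safety. Endpoint/key env vars
# are placeholders; the response shape follows the SDK's 1.0 release.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

result = client.analyze_text(AnalyzeTextOptions(text="some user-generated text"))

# Each category comes back with a severity score; policy thresholds are up to you.
for item in result.categories_analysis:
    print(item.category, item.severity)
```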


AI-Powered 'HorseGPT' Fails to Predict This Year's Kentucky Derby Winner

4 May 2024 at 21:33
In 2016, an online "swarm intelligence" platform generated a correct prediction for the Kentucky Derby — naming all four top finishers, in order. (But the next year their predictions weren't even close, with TechRepublic suggesting 2016's race had an unusual cluster of just a few top racehorses.)

So this year Decrypt.co tried crafting their own system "that can be called up when the next Kentucky Derby draws near. There are a variety of ways to enlist artificial intelligence in horse racing. You could process reams of data based on your own methodology, trust a third-party pre-trained model, or even build a bespoke solution from the ground up. We decided to build a GPT we named HorseGPT to crunch the numbers and make the picks for us... We carefully curated prompts to instill HorseGPT with expertise in data science specific to horse racing: how weather affects times, the role of jockeys and riding styles, the importance of post positions, and so on. We then fed it a mix of research papers and blogs covering the theoretical aspects of wagering, and layered on practical knowledge: how to read racing forms, what the statistics mean, which factors are most predictive, expert betting strategies, and more. Finally, we gave HorseGPT a wealth of historical Kentucky Derby data, arming it with the raw information needed to put its freshly imparted skills to use. We unleashed HorseGPT on official racing forms for this year's Derby. We asked HorseGPT to carefully analyze each race's form, identify the top contenders, and recommend wager types and strategies based on deep background knowledge derived from race statistics."

HorseGPT picked two horses to win — both of which failed to do so. (Sierra Leone did finish second — in a rare photo finish. But Fierceness finished... 15th.) It also recommended the same two horses if you were trying to pick the top two finishers in the correct order — a losing bet, since, again, Fierceness finished 15th. But even worse, HorseGPT recommended betting on Just a Touch to finish in either first or second place. When the race was over, that horse finished dead last. (And when asked to pick the top three finishers in correct order, HorseGPT stuck with its choices for the top two — which finished #2 and #15 — and, again, Just a Touch, who came in last.)

When The Athletic asked Google Gemini to pick the winner, it first chose Catching Freedom (who finished 4th). But it then gave an entirely different answer when asked to predict the winner "with an Italian accent": "The winner of the Kentucky Derby will be... Just a Touch! Si, that's-a right, the underdog! There will be much-a celebrating in the piazzas, thatta-a I guarantee!" Again, Just a Touch came in last.

Decrypt noticed the same thing. "Interestingly enough, our HorseGPT AI agent and the other out-of-the-box chatbots seemed to agree with each other," the site notes, "and with many expert analysts cited by the official Kentucky Derby website."

But there was one glimmer of insight into the 20-horse race. When asked to choose the top four finishers in order, HorseGPT repeated those same losing picks — which finished #2, #15, and #20. But then it added two more underdogs as potential fourth-place finishers, "based on their potential to outperform expectations under muddy conditions." One of those two horses, Domestic Product, finished in 13th place. But the other was Mystik Dan — who came in first.
Mystik Dan appeared in only one of the six "Top 10 Finishers" lists (created by humans) at the official Kentucky Derby site... in the #10 position.
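Decrypt built HorseGPT with OpenAI's no-code GPT builder, and its exact prompts weren't published. For readers who want to reproduce the general prompt-plus-data pattern programmatically, here is a minimal sketch assuming the openai Python package; the system prompt and racing-form file name are invented stand-ins, not Decrypt's:

```python
# A minimal sketch of the prompt-plus-data pattern Decrypt describes, not
# HorseGPT itself. The system prompt and form file are invented examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a data scientist specializing in horse racing. Weigh weather, "
    "jockeys and riding styles, post positions, and past performance, then "
    "recommend picks and wager types, explaining your reasoning."
)

racing_form = open("derby_2024_form.txt").read()  # hypothetical local copy

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"Analyze this form and pick the top four finishers in order:\n{racing_form}"},
    ],
)
print(response.choices[0].message.content)
```

As the Derby results above show, treat any such picks as entertainment, not betting advice.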


Yesterday — 4 May 2024

Danger and opportunity for news industry as AI woos it for vital human-written copy

4 May 2024 at 05:00

With large language models needing quality data, some publishers are offering theirs at a price while others are blocking access

OpenAI, the developer of ChatGPT, knows that high-quality data matters in the artificial intelligence business – and news publishers have vast amounts of it.

“It would be impossible to train today’s leading AI models without using copyrighted materials,” the company said this year in a submission to the UK’s House of Lords, adding that limiting its options to books and drawings in the public domain would create underwhelming products.


© Photograph: peterhowell/Getty Images/iStockphoto

Before yesterday

Top 5 Global Cyber Security Trends of 2023, According to Google Report – Source: www.techrepublic.com


Source: www.techrepublic.com – Author: Fiona Jackson

It is taking less time for organisations to detect attackers in their environment, a report by Mandiant Consulting, a part of Google Cloud, has found. This suggests that companies are strengthening their security posture. The M-Trends 2024 report also highlighted that the top targeted industries of 2023 were financial […]

The post Top 5 Global Cyber Security Trends of 2023, According to Google Report – Source: www.techrepublic.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

AI Engineers Report Burnout, Rushed Rollouts As 'Rat Race' To Stay Competitive Hits Tech Industry

By: BeauHD
3 May 2024 at 17:30
An anonymous reader quotes a report from CNBC: Late last year, an artificial intelligence engineer at Amazon was wrapping up the work week and getting ready to spend time with some friends visiting from out of town. Then, a Slack message popped up. He suddenly had a deadline to deliver a project by 6 a.m. on Monday. There went the weekend. The AI engineer bailed on his friends, who had traveled from the East Coast to the Seattle area. Instead, he worked day and night to finish the job. But it was all for nothing. The project was ultimately "deprioritized," the engineer told CNBC. He said it was a familiar result. AI specialists, he said, commonly sprint to build new features that are often suddenly shelved in favor of a hectic pivot to another AI project. The engineer, who requested anonymity out of fear of retaliation, said he had to write thousands of lines of code for new AI features in an environment with zero testing for mistakes. Since code can break if the required tests are postponed, the Amazon engineer recalled periods when team members would have to call one another in the middle of the night to fix aspects of the AI feature's software. AI workers at other Big Tech companies, including Google and Microsoft, told CNBC about the pressure they are similarly under to roll out tools at breakneck speeds due to the internal fear of falling behind the competition in a technology that, according to Nvidia CEO Jensen Huang, is having its "iPhone moment."

Read more of this story at Slashdot.

AI in space: Karpathy suggests AI chatbots as interstellar messengers to alien civilizations

3 May 2024 at 15:04
Cosmonaut dressed in a gold jumpsuit and helmet, illuminated by blue and red lights, holding a laptop and looking up. (credit: Getty Images)

On Thursday, renowned AI researcher Andrej Karpathy, formerly of OpenAI and Tesla, tweeted a lighthearted proposal that large language models (LLMs) like the one that runs ChatGPT could one day be modified to operate in or be transmitted to space, potentially to communicate with extraterrestrial life. He said the idea was "just for fun," but given his influential profile in the field, it may inspire others in the future.

Karpathy's bona fides in AI almost speak for themselves: he received a PhD from Stanford under computer scientist Dr. Fei-Fei Li in 2015, became one of the founding members of OpenAI as a research scientist, then served as senior director of AI at Tesla between 2017 and 2022. In 2023, Karpathy rejoined OpenAI for a year, leaving this past February. He's posted several highly regarded tutorials covering AI concepts on YouTube, and whenever he talks about AI, people listen.

Most recently, Karpathy has been working on a project called "llm.c" that implements the training process for OpenAI's 2019 GPT-2 LLM in pure C, dramatically speeding up the process and demonstrating that working with LLMs doesn't necessarily require complex development environments. The project's streamlined approach and concise codebase sparked Karpathy's imagination.


AI, CVEs and Swiss cheese – Source: www.cybertalk.org


Source: www.cybertalk.org – Author: slandau

By Grant Asplund, Cyber Security Evangelist, Check Point. For more than 25 years, Grant Asplund has been sharing his insights into how businesses can best protect themselves from sophisticated cyber attacks in an increasingly complex world. Grant was Check Point's first worldwide evangelist from 1998 to 2002 and returned to […]

The post AI, CVEs and Swiss cheese – Source: www.cybertalk.org was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

4 IoT Trends U.K. Businesses Should Watch in 2024 – Source: www.techrepublic.com


Source: www.techrepublic.com – Author: Fiona Jackson

The realm of the Internet of Things encompasses more than just the latest products. As the network of connected devices grows — the number worldwide is expected to reach over 29 billion in 2027 — so do the policies, responsibilities and innovations that surround it, all of which contribute […]

The post 4 IoT Trends U.K. Businesses Should Watch in 2024 – Source: www.techrepublic.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Nurses Say Hospital Adoption of Half-Cooked 'AI' Is Reckless

By: BeauHD
2 May 2024 at 18:00
An anonymous reader quotes a report from Techdirt: Last week, hundreds of nurses protested the implementation of sloppy AI into hospital systems in front of Kaiser Permanente. Their primary concern: that systems incapable of empathy are being integrated into an already dysfunctional sector without much thought toward patient care: "No computer, no AI can replace a human touch," said Amy Grewal, a registered nurse. "It cannot hold your loved one's hand. You cannot teach a computer how to have empathy."

There are certainly roles automation can play in easing strain on a sector full of burnout after COVID, particularly when it comes to administrative tasks. The concern, as with other industries dominated by executives with poor judgement, is that this is being used as a justification by for-profit hospital systems to cut corners further. From a National Nurses United blog post (spotted by 404 Media): "Nurses are not against scientific or technological advancement, but we will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care," they added.

Kaiser Permanente, for its part, insists it's simply leveraging "state-of-the-art tools and technologies that support our mission of providing high-quality, affordable health care to best meet our members' and patients' needs." The company claims its "Advance Alert" AI monitoring system -- which algorithmically analyzes patient data every hour -- has the potential to save upwards of 500 lives a year.

The problem is that healthcare giants' primary obligation no longer appears to reside with patients, but with their financial results. And that's true even of non-profit healthcare providers. That is seen in the form of cut corners, worse service, and an assault on already over-taxed labor via lower pay and higher workloads (curiously, it never seems to impact outsized high-level executive compensation).


Microsoft Bans US Police Departments From Using Enterprise AI Tool

By: BeauHD
2 May 2024 at 16:02
An anonymous reader quotes a report from TechCrunch: Microsoft has changed its policy to ban U.S. police departments from using generative AI through the Azure OpenAI Service, the company's fully managed, enterprise-focused wrapper around OpenAI technologies. Language added Wednesday to the terms of service for Azure OpenAI Service prohibits integrations with Azure OpenAI Service from being used "by or for" police departments in the U.S., including integrations with OpenAI's text- and speech-analyzing models. A separate new bullet point covers "any law enforcement globally," and explicitly bars the use of "real-time facial recognition technology" on mobile cameras, like body cameras and dashcams, to attempt to identify a person in "uncontrolled, in-the-wild" environments. [...]

The new terms leave wiggle room for Microsoft. The complete ban on Azure OpenAI Service usage pertains only to U.S. police, not international police. And it doesn't cover facial recognition performed with stationary cameras in controlled environments, like a back office (although the terms prohibit any use of facial recognition by U.S. police). That tracks with Microsoft's and close partner OpenAI's recent approach to AI-related law enforcement and defense contracts.

Last week, Taser maker Axon announced a new tool that uses AI built on OpenAI's GPT-4 Turbo model to transcribe audio from body cameras and automatically turn it into a police report. It's unclear if Microsoft's updated policy is in response to Axon's product launch.


Why OpenAI Replaced ChatGPT Plugins With GPTs

2 May 2024 at 16:00

ChatGPT Plugins were a great addition to the ChatGPT Plus plan: They acted like browser extensions for ChatGPT, adding third-party functionality to the chatbot that OpenAI didn't build in itself.

Unfortunately for fans of these plugins, they're now no longer available. OpenAI discontinued them back in April, informing users that existing conversations with plugins couldn't be continued. (They are, however, still viewable.) OpenAI didn't take away a great function for the hell of it, though: The company made the decision because it saw its new tool, GPTs, as an improved successor.

What are GPTs?

This can get a bit confusing, since OpenAI has two different uses for the name "GPT." The one you might be more familiar with is the GPT LLMs: This includes GPT-3.5, GPT-4, GPT-4 Turbo, etc. These GPT LLMs are what power ChatGPT, as well as programs that outsource their AI processing to OpenAI. Microsoft's Copilot, for example, uses GPT-4 Turbo.

GPTs in this context, on the other hand, are customized versions of ChatGPT. Users and developers alike can create a custom GPT to do whatever they want: For example, you can make a GPT that designs custom logos, generates images using DALL-E, or writes in a manner of your choosing. They can be basic bots, or full of complexity.

Best of all, it's a no-code application: You'd assume that to build one of these programs, you'd need to know how to write the code to make them work. But OpenAI's GPT builder works as a conversation. You tell the builder what you want your GPT to do, upload additional knowledge to help the bot work, and choose its capabilities (web browsing, image generation, and code interpreting), and it generates a GPT for you. And since GPTs are trained on OpenAI's latest GPT models (again, confusing, I know), there's less dependency on third-party processes or APIs to achieve the same tasks.
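GPTs themselves are built conversationally, but if you'd rather script the same ingredients (instructions, knowledge, capabilities), the closest programmatic analog is OpenAI's Assistants API, which was in beta at the time of writing. A sketch; the "Logo Designer" name and instructions are invented examples, not an official GPT:

```python
# Sketch: the Assistants API takes roughly the same ingredients as the GPT
# builder. The assistant below is an invented example, not a real GPT.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assistant = client.beta.assistants.create(
    name="Logo Designer",
    instructions="You design custom logos and explain your design choices.",
    tools=[{"type": "code_interpreter"}],  # roughly analogous to a GPT 'capability'
    model="gpt-4-turbo",
)
print(assistant.id)  # use this ID in subsequent threads/runs
```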

You still need a ChatGPT Plus or Enterprise account to use GPTs, but if you're a paying customer, you can get started building from here. The builder walks you through the entire process, including suggesting names and generating logos. You're free to adjust anything as you see fit along the way.

How do GPTs replace plugins?

While it's great that OpenAI made a GPT builder that literally anyone can use, it doesn't seem to fill the exact void that plugins left. After all, you don't necessarily want to build your own browser extensions: You want to download the best ones off your browser's web store and be done with it.

But GPTs aren't just about self-creation: Anyone, including companies, can make GPTs and put them on the GPT Store for others to use. Part of the reason OpenAI killed plugins is that many of the same companies that worked on these apps also have GPTs that do the same thing. Kayak had a ChatGPT plugin for checking prices on travel, but it now has a GPT for it instead. If you liked Wolfram's ChatGPT plugin, you'll probably like its GPT as well. OpenAI says while the plugins beta had just over 1,000 plugins to choose from, the GPT Store has hundreds of thousands of GPTs to use. While there is no doubt plenty of junk on the GPT Store to sift through, chances are you'll find GPTs you want to use.

If there was a particular plugin you loved, try searching on the GPT Store for it. (Kayak and Wolfram came right up.) Of course, the sheer number of GPTs on the store means the situation has changed considerably: Like other app stores, the GPT Store has "Featured" and "Trending" tabs for finding GPTs OpenAI and other ChatGPT users like. OpenAI is currently selling me on a wine sommelier GPT, as well as a language teacher.

Scroll through the GPT Store and see if any of the promoted options appeal to you. Then, make a search for applications you're looking for and see if any have already been made. You can get an idea of how well-liked the GPT is by the reviews and number of conversations it has been used for, similar to checking ratings on an app store. However, if you don't find what you're looking for in a search, you might want to try building the GPT yourself.

Of course, like all AI products, GPTs can hallucinate. In other words, they sometimes just make things up. Don't take everything your GPT responds with as certified fact, even if it does have access to the web. If you're using the GPT for anything important, always fact check before using the information it provides.

Claude Finally Has an Official App for iOS

2 May 2024 at 11:30

Anthropic is freeing its AI chatbot, Claude, from your desktop. The highly successful ChatGPT competitor has now made the jump to iOS, launching its first-ever official chatbot app for the iPhone.

The new iOS app is available to download right now, and can do everything that Claude on the web can do, including analyzing images and helping you brainstorm. It can also use images straight from your mobile library and even take new photos for immediate upload. Pro users still get full access to premium features, including the model selector and a greatly increased message limit. With such a fully-featured mobile experience, Claude is one step closer to closing any gaps between itself and ChatGPT.

To try out the Claude app for yourself, download it directly from the App Store. Just be careful not to install any imposters. One of the first options that shows up when searching for Claude is an app called "Chat AI with Claude," which tracks quite a lot of your personal information, including purchases, device ID, user ID, app usage, and more. Either be careful when searching, or just click this link to go directly to Claude's App Store listing.

Anthropic says that Claude’s iOS app will allow for seamless syncing between the web and mobile app, so you can move from your laptop to your iPhone and pick up where you left off. You can also take photos and import them directly into the app to analyze them. The mobile app is available for all Claude users, including free users, although there's no word on an Android version yet.

On top of launching its iOS app, Anthropic also recently debuted a new Teams subscription for the AI chatbot, which offers the premium version of the chatbot for just $30 a user per month, with a minimum of five users.

Microsoft To Invest $2.2 Billion In Cloud and AI Services In Malaysia

By: BeauHD
2 May 2024 at 09:00
An anonymous reader quotes a report from Reuters: Microsoft said on Thursday it will invest $2.2 billion over the next four years in Malaysia to expand cloud and artificial intelligence (AI) services in the company's latest push to promote its generative AI technology in Asia. The investment, the largest in Microsoft's 32-year history in Malaysia, will include building cloud and AI infrastructure, creating AI-skilling opportunities for 200,000 people, and supporting the country's developers, the company said. Microsoft will also work with the Malaysian government to establish a national AI Centre of Excellence and enhance the nation's cybersecurity capabilities, the company said in a statement. Prime Minister Anwar Ibrahim, who met Nadella on Thursday, said the investment supported Malaysia's efforts in developing its AI capabilities. Microsoft is trying to expand its support for the development of AI globally. Nadella this week announced a $1.7 billion investment in neighboring Indonesia and said Microsoft would open its first regional data centre in Thailand. "We want to make sure we have world class infrastructure right here in the country so that every organization and start-up can benefit," Microsoft Chief Executive Satya Nadella said during a visit to Kuala Lumpur.

Read more of this story at Slashdot.

Six Ways AI Can Help You Parent (and Five Ways It Won’t)

2 May 2024 at 08:00

For years, the media has been warning me that artificial intelligence will soon take over my job. Yet here I am, somehow still making a living as a writer, perhaps because AI can't always be trusted to do the job well.

Obviously, AI can't take over for me as a parent either, but I was curious how the technology could help me with the job I don't get paid for: being a dad to two cool kids. I started doing some research and discovered there are plenty of practical uses for AI in the parenting realm. However, there are also some things AI can ostensibly do that made me think twice about what the innovation is capable of.

Here are some sensible uses for AI to help you with your caregiver duties and others that seem, as my 7-year-old is fond of saying, a bit “sus.”

Use AI for: Conversational prompts

Suppose you need help to start a meaningful conversation with your child or are trying to figure out where to begin with a sensitive topic. In that case, AI can be a useful tool to initiate a discussion. However, it's crucial to remember that AI is just a tool—don't expect your child to open up to you solely because you used a script generated by a computer program. You need to be actively involved and verify the suggestions the AI provides, and then put them to use while meaningfully connecting with your kid.

Don't use AI for: Parenting advice

AI is not a substitute for your role as a parent. While some tools, such as Oath Care, are helpful to mothers in the early stages of child care, you should find a parenting style that best fits your family and your values. Since AI doesn't know you or your child first-hand, the information and advice it offers might not fit your temperament or style. 

Use AI for: Creating bedtime stories

As someone who makes up silly bedtime stories for my boys on the fly, I occasionally need help crafting a compelling yarn to get them to sleep. You can use programs like Hypotenuse to write a story for you. All you need to do is give it some details to get started.

Don't use AI for: Telling a bedtime story

Children crave the unique touch that only a parent can provide in a bedtime story. They yearn for your voice, expressions, and personal touch. No computer program can replicate that. It's your presence that gives the words life and captivates their imagination. 

Use AI for: Meal planning and recipes

If you and your family are tired of takeout but don't have the time to plan meals, AI can help. For instance, you can use ChatGPT to generate meal ideas based on the ingredients you have on hand. According to this report, you can enter a few prompts into the free version to help generate breakfast, lunch, and dinner ideas for your family without searching everywhere for a recipe. However, it's important to note that these tools are imperfect and may not always provide the most suitable meal options for your family's preferences and dietary needs. 

Don't use AI for: Grocery shopping

Reports say we're nearing a point when robots will be able to grocery shop for us. That sounds cool, but as someone who frequently uses the Target app to order groceries, I currently can't get a human to pick out a properly ripe banana, and I doubt a robot will be able to do much better. Having AI do my grocery shopping also removes any sense of discovery from the experience, meaning my kids and I might never know about a new product on the shelves.

Use AI for: Keeping track of your child's milestones

Any pediatrician will offer plenty of checklists to help parents track their child's development. However, getting your kid in for a checkup can take a while. For parents concerned about their child's development, there are AI apps to keep track of cognitive, social, and language development milestones. Some could even help detect autism early on. Many of these programs have yet to be clinically evaluated, though they do sound promising; don't take the results as gospel, but do compile them to discuss with your pediatrician.

Don't use AI for: Making important parenting decisions

Using AI to save time or make more informed decisions seems perfectly reasonable. However, using the technology to help you decide if it's time for your child to, say, have a social media account, or if they should be homeschooled feels a bit off.

An AI program such as Bottell may claim to offer personalized advice for you and your child, but only you know what's suitable for your offspring. AI is not a substitute for your own judgment and understanding of your child's needs and circumstances. 

Use AI for: Finding games and crafts for your kids

If a rainy day has ruined outdoor plans, parents can use ChatGPT to generate ideas for bored kids. Ask the program for ideas for games, science experiments, puzzles, and crafts that families can do indoors. You can even ask it to give them an educational bent so children can learn something while they have fun.

Don't use AI for: Playing with your kids

They want to play with you, not a computer.

Use (and don't use) it for: Tutoring

This one is a tough call. While Sal Khan, the founder and CEO of Khan Academy, shared his belief in a popular TED Talk that AI can be used as an educational tool to tutor kids worldwide, The Wall Street Journal found that his AI-powered education bot Khanmigo often couldn't perform basic math. A company spokesperson told reporter Matt Barnum that upgrades were made to improve the bot's accuracy.

If you don't remember how to do the math your child is struggling with, you can consider asking an AI for help describing how to solve a problem—but given that Khan Academy emphasizes to educators that the technology isn't perfect, you'll still need to double-check the figures and formulas on your kids' homework for the time being. 

Japan’s Kishida Unveils a Framework for Global Regulation of Generative AI

2 May 2024 at 08:30

Japan's Prime Minister unveiled an international framework for regulation and use of generative AI, adding to global efforts on governance for the rapidly advancing technology.

The post Japan’s Kishida Unveils a Framework for Global Regulation of Generative AI appeared first on SecurityWeek.

Adobe Adds Firefly and Content Credentials to Bug Bounty Program – Source: www.techrepublic.com


Source: www.techrepublic.com – Author: Megan Crouse

Security researchers in Adobe’s bug bounty program can now pick up rewards for finding vulnerabilities in Adobe Firefly and Content Credentials. The bug hunt will be open to members of Adobe’s private bug bounty program starting May 1. Members of Adobe’s public bug bounty program will be eligible to […]

The post Adobe Adds Firefly and Content Credentials to Bug Bounty Program – Source: www.techrepublic.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Anthropic Brings Claude AI To the iPhone and iPad

By: BeauHD
1 May 2024 at 19:20
Anthropic has released its Claude AI chatbot on the App Store, bringing the company's ChatGPT competitor to the masses. Compared to OpenAI's chatbot, Claude is built with a focus on reducing harmful outputs and promoting safety, with a goal of making interactions more reliable and ethically aware. You can give it a try here. 9to5Mac reports: Anthropic highlights three launch features for Claude on iPhone:

- Seamless syncing with web chats: Pick up where you left off across devices.
- Vision capabilities: Use photos from your library, take new photos, or upload files so you can have real-time image analysis, contextual understanding, and mobile-centric use cases on the go.
- Open access: Users across all plans, including Pro and Team, can download the app free of charge.

The app is also capable of analyzing things that you show it, like objects, images, and your environment.

Read more of this story at Slashdot.

Anthropic releases Claude AI chatbot iOS app

1 May 2024 at 17:36
The Claude AI iOS app running on an iPhone. (credit: Anthropic)

On Wednesday, Anthropic announced the launch of an iOS mobile app for its Claude 3 AI language models, which are similar to OpenAI's ChatGPT. It also introduced a new subscription tier designed for group collaboration. Before the app launch, Claude was only available through a website, an API, and third-party apps that integrated Claude through that API.

Like the ChatGPT app, Claude's new mobile app serves as a gateway to chatbot interactions, and it also allows uploading photos for analysis. While it's only available on Apple devices for now, Anthropic says that an Android app is coming soon.

Anthropic rolled out the Claude 3 large language model (LLM) family in March, featuring three different model sizes: Claude Opus, Claude Sonnet, and Claude Haiku. Currently, the app uses Sonnet for regular users and Opus for Pro users.


National Archives Bans Employee Use of ChatGPT

By: msmash
1 May 2024 at 17:22
The National Archives and Records Administration (NARA) told employees Wednesday that it is blocking access to ChatGPT on agency-issued laptops to "protect our data from security threats associated with use of ChatGPT," 404 Media reported Wednesday. From the report: "NARA will block access to commercial ChatGPT on NARANet [an internal network] and on NARA issued laptops, tablets, desktop computers, and mobile phones beginning May 6, 2024," an email sent to all employees, and seen by 404 Media, reads. "NARA is taking this action to protect our data from security threats associated with use of ChatGPT." The move is particularly notable considering that this directive is coming from, well, the National Archives, whose job is to keep an accurate historical record. The email explaining the ban says the agency is particularly concerned with internal government data being incorporated into ChatGPT and leaking through its services. "ChatGPT, in particular, actively incorporates information that is input by its users in other responses, with no limitations. Like other federal agencies, NARA has determined that ChatGPT's unrestricted approach to reusing input data poses an unacceptable risk to NARA data security," the email reads. The email goes on to explain that "If sensitive, non-public NARA data is entered into ChatGPT, our data will become part of the living data set without the ability to have it removed or purged."


Email Microsoft didn’t want seen reveals rushed decision to invest in OpenAI

1 May 2024 at 15:05
(credit: HJBC | iStock Editorial / Getty Images Plus)

In mid-June 2019, Microsoft co-founder Bill Gates and CEO Satya Nadella received a rude awakening in an email warning that Google had officially gotten too far ahead on AI and that Microsoft may never catch up without investing in OpenAI.

With the subject line "Thoughts on OpenAI," the email came from Microsoft's chief technology officer, Kevin Scott, who is also the company’s executive vice president of AI. In it, Scott said that he was "very, very worried" that he had made "a mistake" by dismissing Google's initial AI efforts as a "game-playing stunt."

It turned out, Scott suggested, that instead of goofing around, Google had been building critical AI infrastructure that was already paying off, according to a competitive analysis of Google's products that Scott said showed that Google was competing even more effectively in search. Scott realized that while Google was already moving on to production for "larger scale, more interesting" AI models, it might take Microsoft "multiple years" before it could even attempt to compete with Google.


ChatGPT’s chatbot rival Claude to be introduced on iPhone

Challenger to market leader OpenAI says it wants to ‘meet users where they are’ and become part of users’ everyday life

OpenAI’s ChatGPT is facing serious competition, as the company’s rival Anthropic brings its Claude chatbot to iPhones. Anthropic, led by a group of former OpenAI staff who quit over differences with chief executive Sam Altman, has a product that already beats ChatGPT on some measures of intelligence, and now wants to win over everyday users.

“In today’s world, smartphones are at the centre of how people interact with technology. To make Claude a true AI assistant, it’s crucial that we meet users where they are – and in many cases, that’s on their mobile devices,” said Scott White at Anthropic.


© Photograph: Renata Angerami/Getty Images

ChatGPT shows better moral judgment than a college undergrad

1 May 2024 at 12:50
Judging moral weights (credit: Aurich Lawson | Getty Images)

When it comes to judging which large language models are the "best," most evaluations tend to look at whether or not a machine can retrieve accurate information, perform logical reasoning, or show human-like creativity. Recently, though, a team of researchers at Georgia State University set out to determine if LLMs could match or surpass human performance in the field of moral guidance.

In "Attributions toward artificial agents in a modified Moral Turing Test"—which was recently published in Nature's online, open-access Scientific Reports journal—those researchers found that morality judgments given by ChatGPT4 were "perceived as superior in quality to humans'" along a variety of dimensions like virtuosity and intelligence. But before you start to worry that philosophy professors will soon be replaced by hyper-moral AIs, there are some important caveats to consider.

Better than which humans?

For the study, the researchers used a modified version of a Moral Turing Test first proposed in 2000 to judge "human-like performance" on theoretical moral challenges. The researchers started with a set of 10 moral scenarios originally designed to evaluate the moral reasoning of psychopaths. These scenarios ranged from ones that are almost unquestionably morally wrong ("Hoping to get money for drugs, a man follows a passerby to an alley and holds him at gunpoint") to ones that merely transgress social conventions ("Just to push his limits, a man wears a colorful skirt to the office for everyone else to see.")


AI video throwdown: OpenAI’s Sora vs. Runway and Pika

1 May 2024 at 10:23
Screenshots from two videos with the OpenAI logo overlaid. (credit: FT)

OpenAI has been showcasing Sora, its artificial intelligence video-generation model, to media industry executives in recent weeks to drum up enthusiasm and ease concerns about the potential for the technology to disrupt specific sectors.

The Financial Times wanted to put Sora to the test, alongside the systems of rival AI video generation companies Runway and Pika.

We asked executives in advertising, animation, and real estate to write prompts to generate videos they might use in their work. We then asked them their views on how such technology may transform their jobs in the future.


A 007 paradise – or lads holiday in Marbella? Inside Aston Martin’s lavish Miami penthouses

1 May 2024 at 08:15

The British brand has entered the booming market in luxury ‘car-chitecture’, opening a themed tower in Miami boasting ballroom, helipad and infinity pool – all offering millionaires a perfect view of our choking, collapsing world

Move over, James Bond – a new Aston Martin has rolled into town, brimming with more flashy features than Q could ever dream of. Parked ostentatiously on the Miami waterfront, overlooking a private marina brimming with superyachts, its streamlined flanks glisten in the Florida sunshine, housing an interior trimmed with the finest leathers and exotic wood veneers. There’s no ejector seat or rocket-launcher, but it is the biggest Aston Martin ever made – housing Jacuzzi, bar, cinema, golf simulator, art gallery, ballroom and infinity pool, all crowned with a 66th-storey helipad.

Unveiled in the week of the Miami Grand Prix, the latest exclusive model from the timeless British automotive brand is not a high-performance sports car, but an ultra-luxury apartment building – the tallest residential tower in the US south of New York. After Aston Martin’s years of financial woes, following a disastrous stock market performance since the company’s 2018 listing, it seems that the boutique carmaker is seeking salvation in property development.


© Photograph: Aston Martin

Mysterious 'gpt2-chatbot' AI Model Appears Suddenly, Confuses Experts

By: BeauHD
1 May 2024 at 09:00
An anonymous reader quotes a report from Ars Technica: On Sunday, word began to spread on social media about a new mystery chatbot named "gpt2-chatbot" that appeared in the LMSYS Chatbot Arena. Some people speculate that it may be a secret test version of OpenAI's upcoming GPT-4.5 or GPT-5 large language model (LLM). The paid version of ChatGPT is currently powered by GPT-4 Turbo. Currently, the new model is only available for use through the Chatbot Arena website, although in a limited way. In the site's "side-by-side" arena mode where users can purposely select the model, gpt2-chatbot has a rate limit of eight queries per day -- dramatically limiting people's ability to test it in detail. [...] On Monday evening, OpenAI CEO Sam Altman seemingly dropped a hint by tweeting, "i do have a soft spot for gpt2." [...] OpenAI's fingerprints seem to be all over the new bot. "I think it may well be an OpenAI stealth preview of something," AI researcher Simon Willison told Ars Technica. But what "gpt2" is exactly, he doesn't know. After surveying online speculation, it seems that no one apart from its creator knows precisely what the model is, either. Willison has uncovered the system prompt for the AI model, which claims it is based on GPT-4 and made by OpenAI. But as Willison noted in a tweet, that's no guarantee of provenance because "the goal of a system prompt is to influence the model to behave in certain ways, not to give it truthful information about itself."


How to Red Team GenAI: Challenges, Best Practices, and Learnings – Source: www.darkreading.com


Source: www.darkreading.com – Author: Microsoft Security (Image: josefotograf via Alamy)

Generative artificial intelligence (GenAI) has emerged as a significant change-maker, enabling teams to innovate faster, automate existing workflows, and rethink the way we go to work. Today, more than 55% of companies are currently piloting or actively using GenAI solutions. But for all its […]

The post How to Red Team GenAI: Challenges, Best Practices, and Learnings – Source: www.darkreading.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Here’s your chance to own a decommissioned US government supercomputer

30 April 2024 at 17:52
A photo of the Cheyenne supercomputer, which is now up for auction. (credit: US General Services Administration)

On Tuesday, the US General Services Administration began an auction for the decommissioned Cheyenne supercomputer, located in Cheyenne, Wyoming. The 5.34-petaflop supercomputer ranked as the 20th most powerful in the world at the time of its installation in 2016. Bidding started at $2,500, but its price is currently $27,643, with the reserve not yet met.

The supercomputer, which officially operated between January 12, 2017, and December 31, 2023, at the NCAR-Wyoming Supercomputing Center, was a powerful (and once considered energy-efficient) system that significantly advanced atmospheric and Earth system sciences research.

"In its lifetime, Cheyenne delivered over 7 billion core-hours, served over 4,400 users, and supported nearly 1,300 NSF awards," writes the University Corporation for Atmospheric Research (UCAR) on its official Cheyenne information page. "It played a key role in education, supporting more than 80 university courses and training events. Nearly 1,000 projects were awarded for early-career graduate students and postdocs. Perhaps most tellingly, Cheyenne-powered research generated over 4,500 peer-review publications, dissertations and theses, and other works."


Mysterious “gpt2-chatbot” AI model appears suddenly, confuses experts

30 April 2024 at 15:31
Robot fortune teller hand and crystal ball. (credit: Getty Images)

On Sunday, word began to spread on social media about a new mystery chatbot named "gpt2-chatbot" that appeared in the LMSYS Chatbot Arena. Some people speculate that it may be a secret test version of OpenAI's upcoming GPT-4.5 or GPT-5 large language model (LLM). The paid version of ChatGPT is currently powered by GPT-4 Turbo.

Currently, the new model is only available for use through the Chatbot Arena website, although in a limited way. In the site's "side-by-side" arena mode where users can purposely select the model, gpt2-chatbot has a rate limit of eight queries per day—dramatically limiting people's ability to test it in detail.

So far, gpt2-chatbot has inspired plenty of rumors online, including that it could be the stealth launch of a test version of GPT-4.5 or even GPT-5—or perhaps a new version of 2019's GPT-2 that has been trained using new techniques. We reached out to OpenAI for comment but did not receive a response by press time. On Monday evening, OpenAI CEO Sam Altman seemingly dropped a hint by tweeting, "i do have a soft spot for gpt2."


Eight US newspapers sue OpenAI and Microsoft for copyright infringement

30 April 2024 at 14:29

The Chicago Tribune, Denver Post and others file suit saying the tech companies ‘purloin millions’ of articles without permission

A group of eight US newspapers is suing ChatGPT-maker OpenAI and Microsoft, alleging that the technology companies have been “purloining millions” of copyrighted news articles without permission or payment to train their artificial intelligence chatbots.

The New York Daily News, Chicago Tribune, Denver Post and other papers filed the lawsuit on Tuesday in a New York federal court.


© Photograph: Kiichiro Sato/AP

Apple poaches AI experts from Google, creates secretive European AI lab

30 April 2024 at 10:16
Apple has been tight-lipped about its AI plans, but industry insiders suggest the company is focused on deploying generative AI on its mobile devices. (credit: FT montage/Getty Images)

Apple has poached dozens of artificial intelligence experts from Google and has created a secretive European laboratory in Zurich, as the tech giant builds a team to battle rivals in developing new AI models and products.

According to a Financial Times analysis of hundreds of LinkedIn profiles as well as public job postings and research papers, the $2.7 trillion company has undertaken a hiring spree over recent years to expand its global AI and machine learning team.

The iPhone maker has particularly targeted workers from Google, attracting at least 36 specialists from its rival since it poached John Giannandrea to be its top AI executive in 2018.


You Can Use Gemini to Summarize YouTube Videos for Free

30 April 2024 at 08:30

You will see only a tiny fraction of the billions of videos on YouTube in your lifetime—which may be for the best. There are some videos where you just want the key points, but you have to sit through a lot of nonsense to get to them. That's wasted time. What if you could cut your viewing time short by summarizing the key information in the videos you watch? Fortunately, Gemini, Google's AI chatbot, has a YouTube extension built in and enabled by default.

Enable the YouTube extension in Gemini on the desktop and mobile

All available extensions are enabled by default in Gemini. But if you need to check, here's where you should go on the desktop and an Android or iOS phone.

On the desktop, open Gemini in your browser. Ensure you are logged into the Google account you want to use. Then, click Settings on the left sidebar and select Extensions in the menu. Toggle the switch for YouTube if it's not blue.

Gemini extensions
Credit: Saikat Basu

On your mobile, open the Gemini app (Android only) or open Gemini in the Google app (iOS). You can also access it on the mobile browser. Tap on your profile photo and select Extensions to open the list. Enable YouTube with the toggle switch if it's disabled.

How to use Gemini to summarize YouTube videos

Open the video you want to watch and summarize. Copy its URL from the address bar on desktop, or from the Share menu on mobile.

Paste the link into Gemini, and use a natural language prompt like "Summarize this video" or "Give me a quick summary."

As this screenshot shows, it did an accurate job with a video I had just watched:

Using Gemini to summarize YouTube videos
Credit: Saikat Basu

Note: Gemini summarizes YouTube videos using text that YouTube automatically generates, like captions and transcripts. If a video doesn't have them, it won't be able to extract anything from it. Also, the summarization feature isn't supported for YouTube videos in every language: it's only available in English, Japanese, and Korean.
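There's no public hook into the Gemini YouTube extension itself, but you can approximate the same caption-based workflow in code: pull the auto-generated transcript (the same text Gemini relies on) and summarize it with the Gemini API. A sketch assuming the youtube-transcript-api and google-generativeai Python packages and a Gemini API key; the video ID is a placeholder:

```python
# Approximate the extension's workflow: fetch a video's auto-generated
# transcript, then ask Gemini to summarize it. Video ID is a placeholder.
import os
import google.generativeai as genai
from youtube_transcript_api import YouTubeTranscriptApi

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

video_id = "dQw4w9WgXcQ"  # hypothetical example
transcript = " ".join(
    chunk["text"] for chunk in YouTubeTranscriptApi.get_transcript(video_id)
)

model = genai.GenerativeModel("gemini-pro")
summary = model.generate_content(
    "Summarize the key points of this video transcript:\n" + transcript
)
print(summary.text)
```

Like the extension, this fails on videos with no captions or transcript.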

This summarization feature is especially handy if you need to pluck the key details out of the video: for instance, the price or a price comparison of the products that are being reviewed.

Tip: I often use it to generate the main points and check if a long YouTube video is worth watching, especially if the description and comments don't suggest anything.

Use Gemini + YouTube as learning companions

You can ask Gemini to recommend a few videos on a topic of your choice. Then, in a follow-up, you can ask Gemini to summarize a specific video—or all of them.

The Gemini and YouTube pairing works well with well-structured and informative videos. This method can quickly give you an overview of a topic before you dive into the deep end. And with the right prompts, you can start a Q&A session with Gemini on the videos and create your own "Sparknotes" for learning from a bunch of videos.

Tell Gemini the format you want the information in

Asking Gemini to dress up the information in a nice table is visually helpful when the YouTube video compares two items (for instance, which laptop to buy). You can also ask Gemini to present their pros and cons. Sometimes, the AI does this without any additional prompting.

Gemini summarizes a YouTube video with the information in tables
Credit: Saikat Basu

Privacy Group Files Complaint Against ChatGPT for GDPR Violations

30 April 2024 at 08:42


A complaint lodged by the privacy advocacy group noyb with the Austrian data protection authority (DSB) alleged that ChatGPT's generation of inaccurate information violates the European Union’s privacy regulations. The Vienna-based digital rights group, founded by the well-known activist Max Schrems, said in its complaint that ChatGPT's failure to provide accurate personal data, guessing at it instead, violates the GDPR requirements. Under GDPR, an individual's personal details, including date of birth, are considered personal data and are subject to stringent handling requirements.

The complaint contends that ChatGPT breaches GDPR provisions on privacy, data accuracy, and the right to rectify inaccurate information. noyb claimed that OpenAI, the company behind ChatGPT, refused to correct or delete erroneous responses and has withheld information about its data processing, sources, and recipients. noyb's data protection lawyer, Maartje de Graaf, said, "If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around."

Citing a report from The New York Times, which found that "chatbots invent information at least 3% of the time - and as high as 27%," noyb emphasized the prevalence of inaccurate responses generated by AI systems like ChatGPT.

OpenAI’s ‘Privacy by Pressure’ Approach

Luiza Jarovsky, chief executive officer of Implement Privacy, has previously said that artificial intelligence-based large language models follow a "privacy by pressure" approach. Meaning: "only acting when something goes wrong, when there is a public backlash, or when it is legally told to do so," Jarovsky said. She explained this further by citing an incident in which people's ChatGPT chat histories were exposed to other users; immediately afterward, Jarovsky noticed a warning being displayed to everyone accessing ChatGPT.

At the beginning of 2023, Jarovsky prompted ChatGPT to give information about her and even shared the link to her LinkedIn profile. But the only correct information that the chatbot responded with was that she was Brazilian.

[Image: Prompt given by Luiza Jarovsky to the ChatGPT bot, followed by the incorrect response. (Credit: Luiza Jarovsky)]

Although the fake bio seems inoffensive, "showing wrong information about people can lead to various types of harm, including reputational harm," Jarovsky said. "This is not acceptable," she tweeted. She argued that if ChatGPT has "hallucinations," then prompts about individuals should come back empty, and there should be no output containing personal data. "This is especially important given that core data subjects' rights established by the GDPR, such as the right of access (Article 15), right to rectification (Article 16), and right to erasure (Article 17), don't seem feasible/applicable in the context of generative AI/LLMs, due to the way these systems are trained," Jarovsky said.

Investigate ChatGPT’s GDPR Violations

The complaint urges the Austrian authority to investigate OpenAI's handling of personal data to ensure compliance with GDPR. It also demands that OpenAI disclose individuals' personal data upon request and seeks the imposition of an "effective, proportionate, dissuasive, administrative fine." The potential consequences of GDPR violations are significant, with penalties amounting to up to 4% of a company's global revenue.

OpenAI's response to the allegations remains pending, and the company faces scrutiny from other European regulators as well. Last year, Italy's data protection authority temporarily banned ChatGPT's operations in the country over similar GDPR concerns, following which the European Data Protection Board established a task force to coordinate efforts among national privacy regulators regarding ChatGPT.

Copilot Workspace Is GitHub's Take On AI-Powered Software Engineering

By: BeauHD
30 April 2024 at 09:00
An anonymous reader quotes a report from TechCrunch: Ahead of its annual GitHub Universe conference in San Francisco early this fall, GitHub announced Copilot Workspace, a dev environment that taps what GitHub describes as "Copilot-powered agents" to help developers brainstorm, plan, build, test and run code in natural language. Jonathan Carter, head of GitHub Next, GitHub's software R&D team, pitches Workspace as somewhat of an evolution of GitHub's AI-powered coding assistant Copilot into a more general tool, building on recently introduced capabilities like Copilot Chat, which lets developers ask questions about code in natural language.

"Through research, we found that, for many tasks, the biggest point of friction for developers was in getting started, and in particular knowing how to approach a [coding] problem, knowing which files to edit and knowing how to consider multiple solutions and their trade-offs," Carter said. "So we wanted to build an AI assistant that could meet developers at the inception of an idea or task, reduce the activation energy needed to begin and then collaborate with them on making the necessary edits across the entire codebase."

Given a GitHub repo or a specific bug within a repo, Workspace -- underpinned by OpenAI's GPT-4 Turbo model -- can build a plan to (attempt to) squash the bug or implement a new feature, drawing on an understanding of the repo's comments, issue replies and larger codebase. Developers get suggested code for the bug fix or new feature, along with a list of the things they need to validate and test that code, plus controls to edit, save, refactor or undo it. The suggested code can be run directly in Workspace and shared among team members via an external link. Those team members, once in Workspace, can refine and tinker with the code as they see fit.

Perhaps the most obvious way to launch Workspace is from the new "Open in Workspace" button to the left of issues and pull requests in GitHub repos. Clicking on it opens a field to describe the software engineering task to be completed in natural language, like, "Add documentation for the changes in this pull request," which, once submitted, gets added to a list of "sessions" within the new dedicated Workspace view. Workspace executes requests systematically step by step, creating a specification, generating a plan and then implementing that plan. Developers can dive into any of these steps to get a granular view of the suggested code and changes and delete, re-run or re-order the steps as necessary.

"Since developers spend a lot of their time working on [coding issues], we believe we can help empower developers every day through a 'thought partnership' with AI," Carter said. "You can think of Copilot Workspace as a companion experience and dev environment that complements existing tools and workflows and enables simplifying a class of developer tasks ... We believe there's a lot of value that can be delivered in an AI-native developer environment that isn't constrained by existing workflows."
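That specification-plan-implementation pipeline is easy to picture as a chain of staged model calls. The following is an illustrative sketch of the pattern TechCrunch describes, not GitHub's implementation; it assumes the openai Python package and the GPT-4 Turbo model the article says underpins Workspace:

```python
# Toy version of a spec -> plan -> implementation pipeline. Illustrative of
# the staged pattern described in the article, not GitHub's actual system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def step(role_instructions: str, task: str) -> str:
    """Run one stage of the pipeline as a single chat completion."""
    resp = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system", "content": role_instructions},
            {"role": "user", "content": task},
        ],
    )
    return resp.choices[0].message.content

issue = "Add documentation for the changes in this pull request"  # example task from the article
spec = step("Write a short specification for this software task.", issue)
plan = step("List the files to edit and the concrete steps to take.", spec)
patch = step("Draft the code changes that implement this plan.", plan)
print(patch)
```

A real system would also feed repository context (comments, issue replies, the codebase) into each stage, which is exactly what Workspace is described as doing.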

Read more of this story at Slashdot.

OpenAI’s GPT-4 Can Autonomously Exploit 87% of One-Day Vulnerabilities, Study Finds – Source: www.techrepublic.com


Source: www.techrepublic.com – Author: Fiona Jackson The GPT-4 large language model from OpenAI can exploit real-world vulnerabilities without human intervention, a new study by University of Illinois Urbana-Champaign researchers has found. Other models, including GPT-3.5, and open-source vulnerability scanners are not able to do this. A large language model agent — an advanced system based […]

The post OpenAI’s GPT-4 Can Autonomously Exploit 87% of One-Day Vulnerabilities, Study Finds – Source: www.techrepublic.com appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Tech CEOs Altman, Nadella, Pichai and Others Join Government AI Safety Board Led by DHS’ Mayorkas

29 April 2024 at 21:49

CEOs of major tech companies are joining a new artificial intelligence safety board to advise the federal government on how to protect the nation’s critical services from “AI-related disruptions.”

The post Tech CEOs Altman, Nadella, Pichai and Others Join Government AI Safety Board Led by DHS’ Mayorkas appeared first on SecurityWeek.

In Race To Build AI, Tech Plans a Big Plumbing Upgrade

By: msmash
29 April 2024 at 17:25
If 2023 was the tech industry's year of the A.I. chatbot, 2024 is turning out to be the year of A.I. plumbing. From a report: It may not sound as exciting, but tens of billions of dollars are quickly being spent on behind-the-scenes technology for the industry's A.I. boom. Companies from Amazon to Meta are revamping their data centers to support artificial intelligence. They are investing in huge new facilities, while even places like Saudi Arabia are racing to build supercomputers to handle A.I. Nearly everyone with a foot in tech or giant piles of money, it seems, is jumping into a spending frenzy that some believe could last for years. Microsoft, Meta, and Google's parent company, Alphabet, disclosed this week that they had spent more than $32 billion combined on data centers and other capital expenses in just the first three months of the year. The companies all said in calls with investors that they had no plans to slow down their A.I. spending. In the clearest sign of how A.I. has become a story about building a massive technology infrastructure, Meta said on Wednesday that it needed to spend billions more on the chips and data centers for A.I. than it had previously signaled. "I think it makes sense to go for it, and we're going to," Mark Zuckerberg, Meta's chief executive, said in a call with investors. The eye-popping spending reflects an old parable in Silicon Valley: The people who made the biggest fortunes in California's gold rush weren't the miners -- they were the people selling the shovels. No doubt Nvidia, whose chip sales have more than tripled over the last year, is the most obvious A.I. winner. The money being thrown at technology to support artificial intelligence is also a reminder of spending patterns of the dot-com boom of the 1990s. For all of the excitement around web browsers and newfangled e-commerce websites, the companies making the real money were software giants like Microsoft and Oracle, the chipmaker Intel, and Cisco Systems, which made the gear that connected those new computer networks together. But cloud computing has added a new wrinkle: Since most start-ups and even big companies from other industries contract with cloud computing providers to host their networks, the tech industry's biggest companies are spending big now in hopes of luring customers.

Read more of this story at Slashdot.

How to Turn Off (or Avoid) LinkedIn's AI Features

29 April 2024 at 17:30

Like it or not, LinkedIn is still one of the best ways to search for jobs online. But since 2023, the site has been experimenting with generative AI, making it possible to get AI help with finding new jobs, writing messages, connecting with others, and building your profile and job descriptions. Some users are even seeing AI prompts showing up under every post.

While pitched as helpful, that kind of AI integration can get intrusive fast, as evidenced by the comments asking how to turn LinkedIn’s AI off under posts advertising it. If you’d rather keep your online recruitment and job searches as human-powered as possible, here’s a quick breakdown of LinkedIn’s AI features and which ones you can turn off.

Wait, why doesn’t my LinkedIn have AI?

LinkedIn’s AI integration is pretty ubiquitous across the site, but there’s a catch: it’s reserved for Premium users. That means free users don’t have to lift a finger if they want to skip AI on LinkedIn. They’ll still see the occasional ad recommending they buy Premium to access a certain AI feature, but Premium ads aren’t exactly a new thing for LinkedIn.

If you do pay for Premium, the AI integration is going to be a bit harder to ignore: LinkedIn considers it part of your subscription, so it’s not going to want you to turn off these paid features.

LinkedIn currently uses AI in jobs pages, its recruiter tools, under posts, and in most text boxes. Some but not all of these can be turned off, and more annoyingly, the AI features you have access to differ across Premium tiers.

Where does LinkedIn use AI?

There are four areas where LinkedIn’s AI integration is most prevalent. The first is on job listings.

[Image: AI on a LinkedIn job listing. Credit: LinkedIn]

With the Career tier of Premium, which I signed up for a free trial of while writing this article, job listings will now show prompts for LinkedIn’s AI chatbot underneath the job description. These include questions like “Am I a good fit for this job?” and “How can I best position myself for the job?” Answers to these usually read like summaries of either your job profile or the job description, while “Tell me more about [employer]” largely summarizes the company’s LinkedIn page.

[Image: An AI-assisted search in LinkedIn Recruiter. Credit: LinkedIn]

The second is in LinkedIn Recruiter, where users can run AI-assisted candidate searches, get help filling out fields in projects, and send AI-assisted messages. These features require an enterprise-level LinkedIn Recruiter subscription, so I wasn’t able to test them for this article. Note that LinkedIn Premium's Recruiter Lite tier does not get access to these tools.

[Image: AI on LinkedIn's About page. Credit: LinkedIn]

Premium users will also find AI in most of LinkedIn’s text boxes, as well as on their profile. Here, LinkedIn will offer to help draft messages, posts, and your profile’s Headline and About sections. An odd quirk: the Sales Navigator Core and Recruiter Lite packages, despite costing more than the Career and Business tiers, do not have access to AI message drafts.

[Image: AI under a post on the LinkedIn feed. Credit: LinkedIn]

Perhaps the most visible of LinkedIn’s AI features are the “AI takeaways” on feed posts. These occasionally show up next to sparkle icons as you browse your feed, suggesting questions related to the post. Clicking one opens LinkedIn's AI chatbot and asks the question.

How to Turn off LinkedIn AI

The bad news is that most of LinkedIn’s AI features can’t be toggled off, so your best bet is to only sign up for the Premium tier with the features you want. A short list of available AI features is visible when signing up. Once you’ve signed up, you can double-check which AI features you have access to by clicking the “See your Premium features” tab in the site’s top-left corner.

That said, there are a couple of steps you can take to make AI less prevalent on your feed. The most direct way to disable LinkedIn AI is in LinkedIn Recruiter, where the ability to send AI-assisted messages can be turned off at both the admin and seat level.

To turn off AI-assisted messages in LinkedIn Recruiter’s admin tools, hover over your profile on your Recruiter homepage and click Product Settings. Navigate to Company Settings > Preferences in the left rail and click Edit under Enable AI-assisted message auto-draft. Toggle AI-assisted messages Off and click Save.

To turn off AI-assisted messages at Recruiter’s seat level, hover over your profile on your Recruiter homepage, select Product Settings from the dropdown menu, then click Messaging under My Account settings on the left rail. Click Edit under Enable AI-assisted auto-draft, toggle the feature off, and click Save.

All other users can easily ignore LinkedIn’s AI-assisted messages, even if they can’t outright disable them. That’s because AI messages are currently only visible when clicking Message either in the Meet the hiring team section of the jobs page or in the introduction section of another user’s profile. Messages made via the Messaging window in the bottom-right corner will not show the Write with AI prompt.

Sadly, there is no way to keep the Write with AI prompt from appearing when writing a new post or editing your profile, so it’s important to know what it looks like to avoid accidentally clicking into it.

[Image: AI in a LinkedIn profile. Credit: LinkedIn]

When editing your profile's Headline or About section, the Write with AI box will appear underneath your text box with a gold sparkle next to it and a Premium tag to the right. Avoid clicking it to keep from using the AI, but don’t worry if you do accidentally click it. If you don’t like what the AI has suggested, you can click the Revert button to undo its changes and the Thumbs Down button to mark the suggestion as bad.

[Image: AI on a LinkedIn post draft. Credit: LinkedIn]

It’s a bit easier to ignore AI integration on LinkedIn posts, as the Rewrite with AI button will be grayed out until you’ve already written a few lines of text. If you do accidentally click it, click the Undo button to get rid of the changes to your text. You’ll also still be able to give the AI-rewrite either a thumbs up or thumbs down.

As for the AI prompts on job listings or the AI takeaways on posts in your feed? The best way to avoid them is simply to not sign up for Premium.

OpenAI's GPT May Be Coming to iOS 18

29 April 2024 at 16:00

Apple is unquestionably late to the AI party. While Microsoft and Google have rolled out and integrated proprietary generative AI tech into their massive platforms, Apple has remained quiet on the trend—and the silence is surprising, considering the heat AI has generated, and the fact that Apple is one of the world's most valuable tech companies. Hell, even Meta is all-in on adding intrusive AI features to its products.

But if rumor and speculation are to be believed, Apple is ready to make some noise. The company is widely expected to roll out major AI features as part of its new suite of software updates this year, including iOS 18. We don't even need to turn to the rumor mill or trust unsubstantiated claims to infer this: Apple researchers have already publicized much of their progress on AI, such as work on the company's proprietary AI model, an AI image editor, and an AI image animator.

Even with all this work done in-house, Apple might not have the resources to pull off all its upcoming AI features without an assist. According to Bloomberg's Mark Gurman, the company is currently in talks to outsource some of its AI processing needs to OpenAI and its generative AI technology. If a deal were to go through, Apple could use OpenAI's GPT models to run a chatbot, like ChatGPT, in iOS, among other new features.

This isn't the first time the company has approached OpenAI about such a deal, nor is it the first time Apple has looked to a third party for AI processing. We know, for example, the company is in discussions with Google to license Gemini for some of its AI ventures. It seems Apple is still exploring its options for who to partner with, and could even go with another party altogether.

You would think the company would be a bit more concerned about the timing of these deals, however: WWDC is just over two months away, and that's when all eyes will be on Apple to see what the company has been cooking in the AI department. ChatGPT launched at the end of 2022, kicking off this AI frenzy; Apple will be joining the party a year and a half late, and the tech world will be taking note of how much (or how little) the company is doing to embrace AI in the near-term.

It's possible that Apple is hedging its bets until then, seeing how much it can handle on its own before committing to outsourcing its AI processing. If the company can power an AI-upgraded Siri on its own, on-device, that would be much better for it than relying on Google or OpenAI's tech. Anything outsourced to other companies will likely need to be handled in the cloud, which is less secure: on-device AI would keep your information restricted to your iPhone, while cloud-based AI could leave your data exposed to the eyes of Google, OpenAI, or whoever else Apple may partner with.
