
Bruce Schneier Reminds LLM Engineers About the Risks of Prompt Injection Vulnerabilities

Security professional Bruce Schneier argues that large language models have the same vulnerability as phones in the 1970s exploited by John Draper. "Data and control used the same channel," Schneier writes in Communications of the ACM. "That is, the commands that told the phone switch what to do were sent along the same path as voices." Other forms of prompt injection involve the LLM receiving malicious instructions in its training data. Another example hides secret commands in Web pages.

Any LLM application that processes emails or Web pages is vulnerable. Attackers can embed malicious commands in images and videos, so any system that processes those is vulnerable. Any LLM application that interacts with untrusted users — think of a chatbot embedded in a website — will be vulnerable to attack. It's hard to think of an LLM application that isn't vulnerable in some way. Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class.

The real problem here is the same one that plagued the pre-SS7 phone network: the commingling of data and commands. As long as the data — whether it be training data, text prompts, or other input into the LLM — is mixed up with the commands that tell the LLM what to do, the system will be vulnerable. But unlike the phone system, we can't separate an LLM's data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it's the very thing that enables prompt injection.

Like the old phone system, defenses are likely to be piecemeal. We're getting better at creating LLMs that are resistant to these attacks. We're building systems that clean up inputs, both by recognizing known prompt-injection attacks and training other LLMs to try to recognize what those attacks look like. (Although now you have to secure that other LLM from prompt-injection attacks.) In some cases, we can use access-control mechanisms and other Internet security systems to limit who can access the LLM and what the LLM can do. This will limit how much we can trust them. Can you ever trust an LLM email assistant if it can be tricked into doing something it shouldn't do? Can you ever trust a generative-AI traffic-detection video system if someone can hold up a carefully worded sign and convince it to not notice a particular license plate — and then forget that it ever saw the sign...? Someday, some AI researcher will figure out how to separate the data and control paths. Until then, though, we're going to have to think carefully about using LLMs in potentially adversarial situations...like, say, on the Internet.

Schneier urges engineers to balance the risks of generative AI with the powers it brings. "Using them for everything is easier than taking the time to figure out what sort of specialized AI is optimized for the task. But generative AI comes with a lot of security baggage — in the form of prompt-injection attacks and other security risks. We need to take a more nuanced view of AI systems, their uses, their own particular risks, and their costs vs. benefits."
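The limits of the "recognize known attacks" defense Schneier describes can be seen in a minimal sketch of an input cleaner. The patterns below are illustrative examples, not a real blocklist: any phrasing not on the list sails through, which is exactly why this approach blocks individual attacks but not the class.

```python
import re

# A naive prompt-injection filter: flag inputs matching known attack
# phrasings. The patterns are invented for illustration; a novel
# phrasing slips past, which is the piecemeal-defense limitation
# the article describes.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"disregard your system prompt", re.IGNORECASE),
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches any known injection pattern."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)
```

Note how the last case below evades the filter despite having the same intent as the first, illustrating why such filters cannot block the attack class.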

Read more of this story at Slashdot.

'Openwashing'

An anonymous reader quotes a report from The New York Times: There's a big debate in the tech world over whether artificial intelligence models should be "open source." Elon Musk, who helped found OpenAI in 2015, sued the startup and its chief executive, Sam Altman, on claims that the company had diverged from its mission of openness. The Biden administration is investigating the risks and benefits of open source models. Proponents of open source A.I. models say they're more equitable and safer for society, while detractors say they are more likely to be abused for malicious intent. One big hiccup in the debate? There's no agreed-upon definition of what open source A.I. actually means. And some are accusing A.I. companies of "openwashing" -- using the "open source" term disingenuously to make themselves look good. (Accusations of openwashing have previously been aimed at coding projects that used the open source label too loosely.)

In a blog post on Open Future, a European think tank supporting open sourcing, Alek Tarkowski wrote, "As the rules get written, one challenge is building sufficient guardrails against corporations' attempts at 'openwashing.'" Last month the Linux Foundation, a nonprofit that supports open-source software projects, cautioned that "this 'openwashing' trend threatens to undermine the very premise of openness -- the free sharing of knowledge to enable inspection, replication and collective advancement."

Organizations that apply the label to their models may be taking very different approaches to openness. [...] The main reason is that while open source software allows anyone to replicate or modify it, building an A.I. model requires much more than code. Only a handful of companies can fund the computing power and data curation required. That's why some experts say labeling any A.I. as "open source" is at best misleading and at worst a marketing tool. "Even maximally open A.I. systems do not allow open access to the resources necessary to 'democratize' access to A.I., or enable full scrutiny," said David Gray Widder, a postdoctoral fellow at Cornell Tech who has studied use of the "open source" label by A.I. companies.

Read more of this story at Slashdot.

Procedural Artificial Narrative using Gen AI for Turn-Based Video Games

"This research introduces Procedural Artificial Narrative using Generative AI (PANGeA), a structured approach for leveraging large language models (LLMs), guided by a game designer's high-level criteria, to generate narrative content for turn-based role-playing video games (RPGs)."

Full abstract: "This research introduces Procedural Artificial Narrative using Generative AI (PANGeA), a structured approach for leveraging large language models (LLMs), guided by a game designer's high-level criteria, to generate narrative content for turn-based role-playing video games (RPGs). Distinct from prior applications of LLMs used for video game design, PANGeA innovates by not only generating game level data (which includes, but is not limited to, setting, key items, and non-playable characters (NPCs)), but by also fostering dynamic, free-form interactions between the player and the environment that align with the procedural game narrative. The NPCs generated by PANGeA are personality-biased and express traits from the Big 5 Personality Model in their generated responses. PANGeA addresses challenges behind ingesting free-form text input, which can prompt LLM responses beyond the scope of the game narrative. A novel validation system that uses the LLM's intelligence evaluates text input and aligns generated responses with the unfolding narrative. Making these interactions possible, PANGeA is supported by a server that hosts a custom memory system that supplies context for augmenting generated responses thus aligning them with the procedural narrative. For its broad application, the server has a REST interface enabling any game engine to integrate directly with PANGeA, as well as an LLM interface adaptable with local or private LLMs. PANGeA's ability to foster dynamic narrative generation by aligning responses with the procedural narrative is demonstrated through an empirical study and ablation test of two versions of a demo game. These are, a custom, browser-based GPT and a Unity demo. As the results show, PANGeA holds potential to assist game designers in using LLMs to generate narrative-consistent content even when provided varied and unpredictable, free-form text input." Buongiorno, S., Klinkert, L. J., Chawla, T., Zhuang, Z., & Clark, C. (2024). 
PANGeA: Procedural Artificial Narrative using Generative AI for Turn-Based Video Games. arXiv preprint arXiv:2404.19721.
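The abstract says the PANGeA server exposes a REST interface so any game engine can integrate with it, but the paper's endpoint schema is not reproduced here. The sketch below is therefore hypothetical: the URL and JSON field names are invented purely to show the shape of a request a game engine client might send.

```python
import json

# Hypothetical endpoint and schema: the paper describes a REST
# interface but its actual routes and fields are assumptions here.
PANGEA_ENDPOINT = "http://localhost:8000/npc/respond"  # hypothetical

def build_npc_request(npc_id: str, player_text: str, narrative_context: dict) -> str:
    """Serialize the JSON body a game engine might POST to the server.

    Per the abstract, the server side would validate the free-form
    player text against the unfolding narrative and augment the NPC's
    reply using its custom memory system.
    """
    return json.dumps({
        "npc": npc_id,
        "input": player_text,
        "context": narrative_context,
    })
```

A Unity or browser-based client (the two demos the paper evaluates) would POST this body to the server and render the returned, narrative-aligned response.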

OpenAI putting ‘shiny products’ above safety, says departing researcher

Jan Leike, a key safety researcher at the firm behind ChatGPT, quit days after the launch of its latest AI model, GPT-4o

A former senior employee at OpenAI has said the company behind ChatGPT is prioritising “shiny products” over safety, revealing that he quit after a disagreement over key aims reached “breaking point”.

Jan Leike was a key safety researcher at OpenAI as its co-head of superalignment, ensuring that powerful artificial intelligence systems adhered to human values and aims. His intervention comes before a global artificial intelligence summit in Seoul next week, where politicians, experts and tech executives will discuss oversight of the technology.

Continue reading...

© Photograph: Michael Dwyer/AP

How AI turbocharges your threat hunting game – Source: www.cybertalk.org

Source: www.cybertalk.org – Author: slandau EXECUTIVE SUMMARY: Over 90 percent of organizations consider threat hunting a challenge. More specifically, 71 percent say that both prioritizing alerts to investigate and gathering enough data to evaluate a signal’s maliciousness can be quite difficult. Threat hunting is necessary simply because no cyber security protections are always 100% effective. […]

The entry How AI turbocharges your threat hunting game – Source: www.cybertalk.org was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

SugarGh0st RAT variant, targeted AI attacks – Source: www.cybertalk.org

Source: www.cybertalk.org – Author: slandau EXECUTIVE SUMMARY: Cyber security experts have recently uncovered a sophisticated cyber attack campaign targeting U.S-based organizations that are involved in artificial intelligence (AI) projects. Targets have included organizations in academia, private industry and government service. Known as UNK_SweetSpecter, this campaign utilizes the SugarGh0st remote access trojan (RAT) to infiltrate networks. […]

The entry SugarGh0st RAT variant, targeted AI attacks – Source: www.cybertalk.org was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

An Analysis of AI usage in Federal Agencies

Existing Regulations As part of its guidance to agencies in the AI Risk Management (AI RMF), the National Institute of Standards and Technology (NIST) recommends that an organization must have an inventory of its AI systems and models. An inventory is necessary from the perspective of risk identification and assessment, monitoring and auditing, and governance […]

The post An Analysis of AI usage in Federal Agencies appeared first on Security Boulevard.

OpenAI will use Reddit posts to train ChatGPT under new deal

An image of a woman holding a cell phone in front of the Reddit logo displayed on a computer screen, on April 29, 2024, in Edmonton, Canada.

Enlarge (credit: Getty)

Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit’s Data API, giving the generative AI firm real-time access to Reddit posts.

Reddit content will be incorporated into ChatGPT "and new products," Reddit's blog post said. The social media firm claims the partnership will "enable OpenAI’s AI tools to better understand and showcase Reddit content, especially on recent topics." OpenAI will also start advertising on Reddit.

The deal is similar to one that Reddit struck with Google in February that allows the tech giant to make "new ways to display Reddit content" and provide "more efficient ways to train models," Reddit said at the time. Neither Reddit nor OpenAI disclosed the financial terms of their partnership, but Reddit's partnership with Google was reportedly worth $60 million.

Read 8 remaining paragraphs | Comments

Slack users horrified to discover messages used for AI training

Enlarge (credit: Tim Robberts | DigitalVision)

After launching Slack AI in February, Slack appears to be digging its heels in, defending its vague policy that by default sucks up customers' data—including messages, content, and files—to train Slack's global AI models.

According to Slack engineer Aaron Maurer, Slack has explained in a blog that the Salesforce-owned chat service does not train its large language models (LLMs) on customer data. But Slack's policy may need updating "to explain more carefully how these privacy principles play with Slack AI," Maurer wrote on Threads, partly because the policy "was originally written about the search/recommendation work we've been doing for years prior to Slack AI."

Maurer was responding to a Threads post from engineer and writer Gergely Orosz, who called for companies to opt out of data sharing until the policy is clarified, not by a blog, but in the actual policy language.

Read 34 remaining paragraphs | Comments

OpenAI's Long-Term AI Risk Team Has Disbanded

An anonymous reader shares a report: In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI's chief scientist and one of the company's cofounders, was named as the colead of this new team. OpenAI said the team would receive 20 percent of its computing power. Now OpenAI's "superalignment team" is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday's news that Sutskever was leaving the company, and the resignation of the team's other colead. The group's work will be absorbed into OpenAI's other research efforts. Sutskever's departure made headlines because although he'd helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board. Hours after Sutskever's departure was announced on Tuesday, Jan Leike, the former DeepMind researcher who was the superalignment team's other colead, posted on X that he had resigned.

Read more of this story at Slashdot.

Sony Music opts out of AI training for its entire catalog

picture of Beyonce who is a Sony artist

Enlarge / The Sony Music letter expressly prohibits artificial intelligence developers from using its music — which includes artists such as Beyoncé. (credit: Kevin Mazur/WireImage for Parkwood via Getty Images)

Sony Music is sending warning letters to more than 700 artificial intelligence developers and music streaming services globally in the latest salvo in the music industry’s battle against tech groups ripping off artists.

The Sony Music letter, which has been seen by the Financial Times, expressly prohibits AI developers from using its music—which includes artists such as Harry Styles, Adele and Beyoncé—and opts out of any text and data mining of any of its content for any purposes such as training, developing or commercializing any AI system.

Sony Music is sending the letter to companies developing AI systems including OpenAI, Microsoft, Google, Suno, and Udio, according to those close to the group.

Read 12 remaining paragraphs | Comments

"this rat borg collective ended up [performing] better than single rats"

Conscious Ants and Human Hives by Peter Watts has an entertaining take on Neuralink.

In brief, Watts doubts Neuralink could provide "faster internet" in the sense Neuralink markets to investors, but other, darker markets exist. On the fiction side, if you've read Blindsight and Echopraxia, then The Colonel amusingly employs Watts's perspective on hiveminds. His talk "Attack of the Hope Police: Delusional Optimism at the End of the World?" is lovely too, as is "The Collapse Is Coming. Will Humanity Adapt?", also by Peter Watts.

How I upgraded my water heater and discovered how bad smart home security can be

The bottom half of a tankless water heater, with lots of pipes connected, in a tight space

Enlarge / This is essentially the kind of water heater the author has hooked up, minus the Wi-Fi module that led him down a rabbit hole. Also, not 140 degrees F—yikes. (credit: Getty Images)

The hot water took too long to come out of the tap. That is what I was trying to solve. I did not intend to discover that, for a while there, water heaters like mine may have been open to anybody. That, with some API tinkering and an email address, a bad actor could possibly set its temperature or make it run constantly. That’s just how it happened.

Let’s take a step back. My wife and I moved into a new home last year. It had a Rinnai tankless water heater tucked into a utility closet in the garage. The builder and home inspector didn't say much about it, just to run a yearly cleaning cycle on it.

Because it doesn’t keep a big tank of water heated and ready to be delivered to any house tap, tankless water heaters save energy—up to 34 percent, according to the Department of Energy. But they're also, by default, slower. Opening a tap triggers the exchanger, heats up the water (with natural gas, in my case), and the device has to push it through the line to where it's needed.

Read 38 remaining paragraphs | Comments

Robert F. Kennedy Jr. sues Meta, citing chatbot’s reply as evidence of shadowban

Enlarge / Screenshot from the documentary Who Is Bobby Kennedy? (credit: whoisbobbykennedy.com)

In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy? and preventing his supporters from advocating for his presidential campaign.

According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy's documentary and making it harder to support Kennedy's candidacy. This allegedly has caused "substantial donation losses," while also violating the free speech rights of Kennedy, his supporters, and his film's production company, AV24.

Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms' automated spam filters.

Read 25 remaining paragraphs | Comments

Hugging Face Is Sharing $10 Million Worth of Compute To Help Beat the Big AI Companies

Kylie Robison reports via The Verge: Hugging Face, one of the biggest names in machine learning, is committing $10 million in free shared GPUs to help developers create new AI technologies. The goal is to help small developers, academics, and startups counter the centralization of AI advancements. [...] Delangue is concerned about AI startups' ability to compete with the tech giants. Most significant advancements in artificial intelligence -- like GPT-4, the algorithms behind Google Search, and Tesla's Full Self-Driving system -- remain hidden within the confines of major tech companies. Not only are these corporations financially incentivized to keep their models proprietary, but with billions of dollars at their disposal for computational resources, they can compound those gains and race ahead of competitors, making it impossible for startups to keep up. Hugging Face aims to make state-of-the-art AI technologies accessible to everyone, not just the tech giants. [...] Access to compute poses a significant challenge to constructing large language models, often favoring companies like OpenAI and Anthropic, which secure deals with cloud providers for substantial computing resources. Hugging Face aims to level the playing field by donating these shared GPUs to the community through a new program called ZeroGPU. The shared GPUs are accessible to multiple users or applications concurrently, eliminating the need for each user or application to have a dedicated GPU. ZeroGPU will be available via Hugging Face's Spaces, a hosting platform for publishing apps, which has over 300,000 AI demos created so far on CPU or paid GPU, according to the company. Access to the shared GPUs is determined by usage, so if a portion of the GPU capacity is not actively utilized, that capacity becomes available for use by someone else. This makes them cost-effective, energy-efficient, and ideal for community-wide utilization. 
ZeroGPU uses Nvidia A100 GPU devices to power this operation -- which offer about half the computation speed of the popular and more expensive H100s. "It's very difficult to get enough GPUs from the main cloud providers, and the way to get them -- which is creating a high barrier to entry -- is to commit on very big numbers for long periods of times," Delangue said. Typically, a company would commit to a cloud provider like Amazon Web Services for one or more years to secure GPU resources. This arrangement disadvantages small companies, indie developers, and academics who build on a small scale and can't predict if their projects will gain traction. Regardless of usage, they still have to pay for the GPUs. "It's also a prediction nightmare to know how many GPUs and what kind of budget you need," Delangue said.
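The usage-based sharing described above, where idle capacity becomes available to others instead of each user pinning a dedicated device, can be sketched as a toy allocator. This is an illustration of the scheduling idea only, not ZeroGPU's actual implementation.

```python
class SharedGPU:
    """Toy model of usage-based sharing: one device, held only while in use."""

    def __init__(self) -> None:
        self.holder: str | None = None

    def acquire(self, user: str) -> bool:
        """Grant the device if it is idle; otherwise the caller must wait."""
        if self.holder is None:
            self.holder = user
            return True
        return False

    def release(self, user: str) -> None:
        """Return the device so its idle capacity goes back to the pool."""
        if self.holder == user:
            self.holder = None
```

Contrast this with the reserved-capacity model the article criticizes, where each tenant pays for a dedicated device whether or not it is utilized.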

Read more of this story at Slashdot.

You'll Soon Be Able to Use Gemini to Search Your Google Photos

At I/O 2024, Google announced a great new AI feature for Google Photos, simply called Ask Photos. With Ask Photos, you can treat the app like a chatbot, say, Gemini or ChatGPT: You can request a specific photo in your library, or ask the app a general question about your photos, and the AI will sift through your entire library to both find the photos and the answers to your queries.

How does Ask Photos work?

When you ask Ask Photos a question, the bot will make a detailed search of your library on your behalf: It first identifies relevant keywords in your query, such as locations, people, and dates, as well as longer phrases, such as "summer hike in Maine."

After that, Ask Photos will study the search results, and decide which ones are most relevant to your original query. Gemini's multimodal abilities allow it to process the elements of each photo, including text, subjects, and action, which helps it decide whether that image is pertinent to the search. Once Ask Photos picks the relevant photos and videos for your query, it combines them into a helpful response.
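The retrieve-then-rank flow described above can be sketched with plain keyword overlap standing in for Gemini's multimodal understanding. The photo library and scoring below are invented for illustration; the real feature analyzes image content, not tag sets.

```python
# Stand-in for the Ask Photos flow: extract query terms, score each
# photo by overlap with its tags, and return the most relevant matches.
PHOTO_LIBRARY = [
    {"id": 1, "tags": {"maine", "hike", "summer", "mountain"}},
    {"id": 2, "tags": {"birthday", "cake", "family"}},
    {"id": 3, "tags": {"maine", "beach", "summer"}},
]

def ask_photos(query: str, top_k: int = 2) -> list[int]:
    """Return the ids of the top_k photos most relevant to the query."""
    terms = set(query.lower().split())
    scored = sorted(
        ((len(terms & photo["tags"]), photo["id"]) for photo in PHOTO_LIBRARY),
        reverse=True,
    )
    return [photo_id for score, photo_id in scored[:top_k] if score > 0]
```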

Google says your personal data in Google Photos is never used for ads, and human reviewers won't see the conversations and personal data in Ask Photos except "in rare cases to address abuse or harm." The company also said it doesn't train its other AI products with this Google Photos data, including other Gemini models and services.

What can you do with Ask Photos?

Of course, Ask Photos is an ideal way to quickly find specific photos you're looking for. You could ask, "Show me the best photos from my trip to Spain last year,” and Google Photos will pull up all your photos from that vacation, along with a text summary of its results. You can use the feature to arrange these photos in a new album, or generate captions for a social media post.

However, the more interesting use here is for finding answers to questions contained in your photos without having to scroll through those photos yourself. Google shared a great example during its presentation: If you ask the app, "What is my license plate number?" it will identify your car out of all the photos of cars in your library. It will not only return a picture of your car with your license plate, but will answer the original question itself. If you're offering advice to a friend about the best restaurants to try in a city you've been to, you could ask, "What restaurants did we go to in New York last year?" and Ask Photos will return both the images of the restaurants in your library, as well as a list you can share.

When will Ask Photos be available?

Google says the experimental feature is rolling out in the coming months, but no specific timeframe was given.

Google Is Bringing More Generative AI to Search

AI has been the dominating force in this year's Google I/O—and one of the biggest announcements that Google made was a new Gemini model customized for Google Search. Over the next few weeks, Google will be rolling out a few AI features in Search, including AI Overviews, AI-organized search results, and search with video.

AI Overviews

When you're searching for something on Google and want a quick answer, AI Overviews come into play. The feature gives you an AI-generated overview of the topic you're searching for, and cites its sources with links you can click through for further reading. Google was testing AI Overviews in Search Labs, but has been rolling out the feature to everyone in the U.S. this week.

At a later date, you'll be able to adjust your AI Overview with options to simplify some of the terminology used, and even break down results in more detail. Ideally, you could turn a complex search into something accessible for anyone. Google is also pushing a feature that lets you stack multiple queries into one search: The company used the example of “find the best yoga or pilates studios in Boston and show me details on their intro offers, and walking time from Beacon Hill," and AI Overviews returned a complete result.

As with other AI-generated models, you can also use this feature to put together plans of action, including creating meal plans and prepping for a trip.

AI-organized results

In addition to AI Overviews, Google Search will soon be using generative AI to create an "AI-organized results page." The idea is the AI will intelligently sort your most relevant options for you, so you won't have to do as much digging around the web. So when you're searching for something like, say, restaurants for a birthday dinner, Google's AI will suggest the best options it can find, organized beneath AI-generated headlines. AI-organized results will be available for English searches in the U.S.

Search with video

Google previously rolled out Circle to Search, which lets you circle elements of your screen to start a Google search for that particular subject. But soon, you'll also be able to start a Google search with video. The company gave an example of a customer who bought a used record player whose needle wasn't working properly. The customer took a video of the issue, describing it out loud, and sent it along as a Google search. Google analyzed the issue and returned a relevant result, as if the user had simply typed out the problem in detail.

Search with video will soon be available for Search Labs users in English in the U.S. Google will expand the feature to more users in the coming months.

What’s up with ChatGPT’s new sexy persona? | Arwa Mahdawi

OpenAI’s updated chatbot GPT-4o is weirdly flirtatious, coquettish and sounds like Scarlett Johansson in Her. Why?

“Any sufficiently advanced technology is indistinguishable from magic,” Arthur C Clarke famously said. And this could certainly be said of the impressive OpenAI update to ChatGPT, called GPT-4o, which was released on Monday. With the slight caveat that it felt a lot like the magician was a horny 12-year-old boy who had just watched the Spike Jonze movie Her.

If you aren’t up to speed on GPT-4o (the o stands for “omni”) it’s basically an all-singing, all-dancing, all-seeing version of the original chatbot. You can now interact with it the same way you’d interact with a human, rather than via text-based questions. It can give you advice, it can rate your jokes, it can describe your surroundings, it can banter with you. It sounds human. “It feels like AI from the movies,” OpenAI CEO Sam Altman said in a blog post on Monday. “Getting to human-level response times and expressiveness turns out to be a big change.”

Continue reading...

© Photograph: Warner Bros./Sportsphoto/Allstar

These Are the Biggest Differences Between Google Gemini and ChatGPT

AI chatbots are more popular than ever, and there are plenty of solid options out there to choose from beyond OpenAI's ChatGPT. One particularly strong competitor is Google's Gemini AI, which used to be called Google Bard. This AI chatbot pulls information from the internet and runs off the latest Gemini language model created by Google.

What is Google Gemini?

Bard, or Gemini as the company now calls it, is Google's answer to ChatGPT. It's an AI chatbot designed to respond to various queries and tasks, all while being plugged into Google's search engine and receiving frequent updates. Like most other chatbots, including ChatGPT, Gemini can answer math problems and help with writing articles and documents, as well as with most other tasks you would expect a generative AI bot to do.

What happened to Google Bard?

Google Bard is now Google Gemini
Credit: Google / Joshua Hawkins

Nothing happened—Google just changed the name. Bard is now Gemini, and Gemini is Google's home for all things AI. The company says it wanted to bring everything into one easy-to-follow ecosystem, which is why it felt the name change was important. You can still access Gemini through the old bard.google.com system, but it will now redirect you to gemini.google.com.

How does Gemini work?

Much like ChatGPT, Gemini is powered by a large language model (LLM) and is designed to respond with reasonable and human-like answers to your queries and requests. Previously, Gemini used Google's PaLM 2 language model, but Google has since released an update that adds Gemini 1.5 Flash and Gemini 1.5 Pro models, the search giant's most complex and capable language models yet. Running Gemini with multiple language models has allowed Google to see the bot in action in several different ways. Gemini can be accessed on any device by visiting the chatbot's website, just like ChatGPT, and is also available on Android and iPhones via the Gemini app.


Who can access Google Gemini?

Gemini is currently available to the general public. Google is still working on the AI chatbot, and hopes to continue improving it. As such, any responses, queries, or tasks submitted to Gemini can be reviewed by Google engineers to help the AI learn more from the questions that you're asking.

To start using Gemini, simply head over to gemini.google.com and sign in. Users who subscribe to Gemini Advanced can utilize the newest and most powerful versions of the AI language model. (More on that later.)

What languages does Gemini support?

Gemini 1.0 Pro currently supports over 40 languages. Google hasn't said yet if it plans to add more language support to the chatbot, but a Google support doc notes that it currently supports: Arabic, Bengali, Bulgarian, Chinese (Simplified / Traditional), Croatian, Czech, Danish, Dutch, English, Estonian, Farsi, Finnish, French, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Kannada, Korean, Latvian, Lithuanian, Malayalam, Marathi, Norwegian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swahili, Swedish, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, and Vietnamese.

Gemini 1.5 Pro supports 35 languages and is available in over 150 different countries and territories. The supported languages include Arabic, Bulgarian, Chinese (Simplified / Traditional), Croatian, Czech, Danish, Dutch, English, Estonian, Farsi, Finnish, French, German, Greek, Hebrew, Hungarian, Indonesian, Italian, Japanese, Korean, Latvian, Lithuanian, Norwegian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swahili, Swedish, Thai, Turkish, Ukrainian, and Vietnamese.

(Note: At the time of this article's writing, Google Gemini Advanced is only optimized for English. However, Google says it should still work with any languages Gemini supports.)

What features does Gemini offer?

Like ChatGPT, Gemini can answer basic questions, help with coding, and solve complex mathematical equations. Additionally, Google added support for multimodal search in July, allowing users to input pictures as well as text into conversations. This, along with the chatbot's other capabilities, enables it to complete reverse image searches. Gemini can also include images in its answers, pulled from the search giant's online results.

Google also previously added the ability to generate images in Gemini using its Imagen model. You can take advantage of this new feature by telling the bot to "create an image." This makes the chatbot more competitive with OpenAI, which also offers image generation through DALL-E.

During Google I/O 2024, Google also showed off plans to expand that multimodal support for Gemini to include video and voice, allowing you to chat with the AI chatbot in real-time, similar to what we're already seeing with ChatGPT's new GPT-4o model.

Is Gemini connected to the internet?

Yes, Google Gemini is connected to the internet and is trained on the latest and most up-to-date information found online. This is obviously a nice advantage over ChatGPT, which just added full access to the internet back in September, and only for paid users who subscribe to its GPT-4 model.

How accurate is Google Gemini?

Now that the chatbot is using Gemini 1.0 Pro and Gemini 1.5 Pro, it's expected to be one of the most accurate chatbots available on the web right now. However, past experiences with Gemini have shown that the bot is likely to hallucinate or take credit for information that it found via Google searches. This is a problem that Google has been working to fix, and the company has managed to improve the results and how they are handled.

However, like any chatbot, Gemini is still capable of creating information that is untrue or plagiarized. As such, it is always recommended you double-check any information that chatbots like Gemini provide, to ensure it is original and accurate.

Is Gemini free to use?

Gemini is currently free to use, but Google also offers a subscription-based plan that allows you to take advantage of its best AI yet, Gemini Advanced. The service is available as part of Google's new Google One AI Premium Plan, which currently runs for $19.99 a month, putting it on par with ChatGPT Plus. The advantage here, of course, is that you also get access to 2TB of storage in Google Drive, as well as access to Gemini in Gmail, Docs, Slides, Sheets, and more. This feature was previously known as Duet AI, but it has since been folded into the Gemini umbrella.

There's an app for that

Google also launched a dedicated Gemini mobile app for Android. iPhone users can access Gemini through the Google app on iOS. Currently, the Gemini mobile app is only available on select devices and only supports English in the U.S. However, Google plans to extend the countries and languages the Gemini app supports in the future. Additionally, the mobile app supports many of the same functions as Google Assistant, and Google is positioning Gemini to replace Assistant in the near future.

How does Gemini compare to ChatGPT?

Gemini is a solid competitor for ChatGPT, especially now that Gemini should return results more akin to those in GPT-4. The interface is very similar, and the functionality offered by both chatbots should handle most of the queries and tasks that you throw at either of them.

Even with Google's paid plan, Gemini is still a more accessible option, as its free models are more similar to GPT-4 than ChatGPT's free option is. That said, OpenAI is starting to roll out a version of GPT-4o to all users, even free ones, but it will have usage limits and isn't widely available yet.

For now, Gemini presents the fewest barriers to internet access, and can use Google as a search engine. When ChatGPT does connect to the internet, it utilizes Bing as a search engine instead of Google.

Google did share some information about how Gemini compares to GPT-4V, one of the latest versions of GPT-4, and said it actually achieves more accurate results in several fields. But as no trustworthy tests are yet available for how Gemini 1.5 Pro compares to GPT-4o, it's unclear exactly how the two newest models from Google and OpenAI compare when placed head to head. Google Gemini 1.5 Pro does offer a maximum context-token count of one million, so it can handle much longer context documents than ChatGPT can now. And Google isn't stopping there, as it plans to offer a Gemini version with support for two million context tokens, which it is already testing with developers.
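For a back-of-the-envelope sense of what a one-million-token window means, a common rule of thumb for English text is roughly four characters per token and about 3,000 characters per printed page. Both figures are heuristics for illustration, not official numbers from either company's tokenizer:

```python
# Rough heuristics, assumed for illustration only:
# ~4 characters per token for English text, ~3,000 characters per page.
CHARS_PER_TOKEN = 4
CHARS_PER_PAGE = 3000

def approx_pages(context_tokens: int) -> int:
    """Approximate how many printed pages of text fit in a context window."""
    return context_tokens * CHARS_PER_TOKEN // CHARS_PER_PAGE

print(approx_pages(1_000_000))  # Gemini 1.5 Pro's current window
print(approx_pages(2_000_000))  # the two-million-token version in testing
```

By that estimate, the current window holds on the order of 1,300 printed pages of text, and the two-million-token version roughly doubles that.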

Ultimately, it's hard to say exactly which one is better, as they both have their strengths. I'd recommend trying to complete whatever task you want to accomplish in both, and then seeing which one works best for your needs. Also, keep in mind that some of the most impressive features that Gemini and ChatGPT offer are not fully available yet. For its part, Google is working on other AI-driven systems, which it could possibly include in Gemini down the line. These include MusicLM, which uses AI to generate music, something the tech giant showed off during Google I/O 2024.

Google's Project Astra Is an AI Assistant That Can Respond to What It Sees

At I/O 2024, Google made lots of exciting AI announcements—but one that has everyone talking is Project Astra. Essentially, Project Astra is what Google is calling an "advanced seeing and talking responsive agent." This means that a future Google AI will be able to get context from what's around you and you can ask a question and get a response in real time. It's almost like an amped-up version of Google Lens.

Project Astra is being developed by Google's DeepMind team, which is on a mission to build AI that can responsibly benefit humanity; this project is just one of the ways it's doing so. Google says that Project Astra is built upon Gemini 1.5 Pro, which has gained improvements in areas such as translation, coding, reasoning, and more. As part of this project, Google says it has developed prototype AI agents that can process information faster by continuously encoding video frames and combining video and speech input into a timeline of events. The company is also using its speech models to enhance how its AI agents sound, giving them a wider range of intonations.

Google released a two-part demo video to show off how Project Astra works. The first half of the video shows Project Astra running on a Google Pixel phone; the latter half shows the new AI running on a prototype glasses device.

In the demo video, we can see the user using their Pixel phone with a camera viewfinder open and moving their device around the room while asking the next-generation Gemini AI assistant, "Tell me when you see something that makes sound" and the AI responding by pointing out the speaker on the desk. Other examples in the video include asking what a part of the code on a computer screen does, what city neighborhood they're currently in and coming up with a band name for a dog and its toy tiger.

While it will be a long time before we see this next-generation AI from Project Astra coming to our daily lives, it's still quite cool to see what the future holds.

You Can Soon Make AI Videos With Google's New VideoFX Tool

As part of Google I/O, the company made several AI announcements, including a new experimental tool called VideoFX. With VideoFX, users can have a high-quality video generated just by typing in a prompt—and it's powered by a new AI model that Google calls Veo.

What is Google Veo?

To get the new VideoFX tool to work, Google's DeepMind team developed a new AI model called Veo. The AI model was specifically created with video generation in mind and has a deep understanding of natural language and visual semantics, meaning you could give it prompts such as "Drone shot along the Hawaii jungle coastline, sunny day" or "Alpacas wearing knit wool sweaters, graffiti background, sunglasses."

The Veo model is clearly Google's answer to OpenAI's Sora AI video generator. While both Veo and Sora can create realistic videos using AI, Sora's videos had a limit of 60 seconds. Meanwhile, Google says that Veo can generate 1080p videos that can go beyond a minute in length, although Google doesn't specify how much longer.

What is VideoFX?

Essentially, the new VideoFX tool takes the power of the new Veo AI model and puts it into an easy-to-use video editing tool. You'll be able to write up your prompts and they'll be turned into a video clip. The tool will also feature a Storyboard mode that you can use to create different shots, add music, and export your final video.

Google's VideoFX tool is currently being tested in a private preview in the U.S. The company hasn't said when the VideoFX tool will be publicly available beyond this preview.

How do you sign up for VideoFX?

For those interested in trying the new Veo-powered tools in VideoFX, you can join the waitlist: Go to http://labs.google/trustedtester and submit your information. Google will be reviewing all submissions on a rolling basis.

Google unveils Veo, a high-definition AI video generator that may rival Sora

Still images taken from videos generated by Google Veo. (credit: Google / Benj Edwards)

On Tuesday at Google I/O 2024, Google announced Veo, a new AI video-synthesis model that can create HD videos from text, image, or video prompts, similar to OpenAI's Sora. It can generate 1080p videos lasting over a minute and edit videos from written instructions, but it has not yet been released for broad use.

Veo reportedly includes the ability to edit existing videos using text commands, maintain visual consistency across frames, and generate video sequences lasting up to and beyond 60 seconds from a single prompt or a series of prompts that form a narrative. The company says it can generate detailed scenes and apply cinematic effects such as time-lapses, aerial shots, and various visual styles.

Since the launch of DALL-E 2 in April 2022, we've seen a parade of new image synthesis and video synthesis models that aim to allow anyone who can type a written description to create a detailed image or video. While neither technology has been fully refined, both AI image and video generators have been steadily growing more capable.


Senators Urge $32 Billion in Emergency Spending on AI After Finishing Yearlong Review

A bipartisan group of four senators led by Majority Leader Chuck Schumer is recommending that Congress spend at least $32 billion over the next three years to develop AI and place safeguards around it, writing in a report released Wednesday that the U.S. needs to "harness the opportunities and address the risks" of the quickly developing technology. AP: The group of two Democrats and two Republicans said in an interview Tuesday that while they sometimes disagreed on the best paths forward, it was imperative to find consensus with the technology taking off and other countries like China investing heavily in its development. They settled on a raft of broad policy recommendations that were included in their 33-page report. While any legislation related to AI will be difficult to pass, especially in an election year and in a divided Congress, the senators said that regulation and incentives for innovation are urgently needed.

Read more of this story at Slashdot.

How to Remove AI From Google Search

Google's AI-heavy I/O keynote has ended, but Gemini has a long way to go before it can turn Google's AI dreams into realities. While many of Google's AI features are months down the line, the AI Overviews feature is already live for all US users.

Google is only going to be adding more AI features to the Search page going forward. This includes the ability to ask longer, more complex questions, or even to organize the entire Search page in different sections using AI. If that sounds like too much for you, there's something you can do about it.

Turn off AI in Google Search

While releasing all its new AI features, Google has also introduced something that will help you go back—way back. There's now a new, easy-to-miss button at the top of the search results page simply called "Web." If you switch to it, Google will only show you text links from websites, just like the good old days (although these can include sponsored ads).

The irony of needing to press a button called Web to get results for a web search is not lost on me. Nevertheless, it will be a useful feature for anyone who prefers the old-school approach to Google Search, the one that only showed you the top results from the web, made up of trusted sites.

The Web filter is rolling out on desktop and mobile search globally starting today and tomorrow, and you should see it in your searches soon. If you don't find it in the toolbar, click the More menu, and it should be there.

Web filter in Google Search.
Credit: Google Search Liaison (via X)

When you switch to the Web filter, your search results will also get rid of any kind of media or pull-out boxes. You won't see sections for images, videos, or Google News stories. Instead, you'll just see links (which themselves can point to YouTube videos, or news stories), according to Google Search Liaison's post on X.

Google has also confirmed to The Verge that the Web filter will stay like this, even as Google continues to add more AI features to the main page of Google Search.

There's no stopping AI

While the Web filter is a nice touch, it's not the default option, and you'll need to switch to it manually every time (like you do when you switch to the Images or Maps filter). This step has also made something else clear: Google is not offering a way to turn off AI search features on the default Search page. Perhaps we will eventually see Chrome extensions that can alter the Google Search page, but for now, the only escape from Google Search's AI is to switch to the Web filter.

5 key takeaways for CISOs, RSA Conference 2024 – Source: www.cybertalk.org


Source: www.cybertalk.org – Author: slandau EXECUTIVE SUMMARY: Last week, over 40,000 business and cyber security leaders converged at the Moscone Center in San Francisco to attend the RSA Conference, one of the leading annual cyber security conferences and expositions worldwide, now in its 33rd year. Across four days, presenters, exhibitors and attendees discussed a wide […]

The post 5 key takeaways for CISOs, RSA Conference 2024 – Source: www.cybertalk.org was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Navigating the New Frontier of AI-Driven Cybersecurity Threats

A few weeks ago, Best Buy revealed its plans to deploy generative AI to transform its customer service function. It’s betting on the technology to create “new and more convenient ways for customers to get the solutions they need” and to help its customer service reps develop more personalized connections with its consumers. By the […]

The post Navigating the New Frontier of AI-Driven Cybersecurity Threats appeared first on Security Boulevard.

The Best Custom GPTs to Make ChatGPT Even More Useful

As long as you've signed up for the ChatGPT Plus package ($20 a month), you can make use of custom GPTs—Generative Pre-trained Transformers. These are chatbots with a specific purpose in mind, whether it's vacation planning or scientific research, and there are a huge number of them to choose from.

They're not all high-quality, but several are genuinely useful; I've picked out our favorites below. To find them, just click Explore GPTs in the left-hand navigation bar in ChatGPT on the web. Once you've installed a particular GPT, you'll be able to get at it from the same navigation bar.

These GPTs have replaced plugins on the platform, and offer more focused experiences and features than the general ChatGPT bot. A lot of them also come with extra knowledge that ChatGPT doesn't have, and you can even have a go at creating your own: From the GPTs directory, click + Create up in the top right corner.

Kayak

Travel portal Kayak has its very own GPT, ready to answer any questions you might have regarding a particular destination or how to get there. Ask about the best time to visit a city, or about the cost of a flight somewhere, or some places you can get to on a particular budget. It's great for getting inspiration about a trip or working out the finer details, and when I tested it on my home city of Manchester, UK, the answers that it came up with were reassuringly accurate—it even correctly told me the price you'll currently pay for a beer.

Hot Mods

The Dall-E image generator (another OpenAI property) is built right into ChatGPT, so you get image creation and manipulation tools out of the box, but Hot Mods is a bit more specific: It takes pictures you've already got and turns them into something else. Add new elements, change the lighting or the vibe, swap out the background, turn photos into paintings and vice versa, and so on. The image transformations aren't always spot on—such is the nature of AI art—but you can have a lot of fun playing around with this.

Tweak and transform existing images with Hot Mods. Credit: Lifehacker

Wine Sommelier

Want to know the perfect wine for a particular dish, event, or time of year? Of course you do, which is why Wine Sommelier can be so helpful. Put it to the test and it'll tell you which wines go well with chicken salad, which wines suit a vegan diet, and which wines will be well-received at a wedding. You can also quiz the GPT on related topics, like wine production processes or vineyards of note. Whether you're a wine expert or you want some tips on how to become one, the Wine Sommelier bot is handy to have around.

Canva

You might already be familiar with Canva's straightforward and intuitive graphic design tools on the web and mobile, and its official GPT enables you to get creative inside the ChatGPT interface as well. Everything from posters to flyers to social media posts is covered, and you can be as deliberate or as vague as you like when it comes to the designs Canva comes up with. The GPT is able to produce text for your projects as well as images, and will ask you questions if it needs more guidance about the graphics you're asking for.

SciSpace

SciSpace is an example of a GPT that brings a wealth of extra knowledge with it, above and beyond ChatGPT's own training data—it can dig into more than 278 million research papers, in fact, so you're able to ask it any question to get the latest academic thinking on a particular topic. Whether you want to know about potential links between exercise and heart health, or whale migration patterns, or anything else, SciSpace has you covered. You can also upload research papers and ask the bot to analyze and summarize them for you.

Coloring Book Hero can create all kinds of designs for you. Credit: Lifehacker

Coloring Book Hero

This is a great idea: Get AI to produce custom coloring books for you, based around any idea or subject you like. Obviously you're going to need a printer as well (unless you're doing the coloring digitally), but there's a lot of fun to be had in playing around with designs and topics—you can even upload your own images and have them converted to coloring book format. Some of the usual AI weirdness does occasionally creep in to the pictures, but they're mostly spot on in terms of bold, black lines and big white spaces.

Wolfram

Wolfram Alpha is one of the best resources on the web, an outstanding collection of algorithms and knowledge, and the Wolfram GPT brings a lot of this usefulness into ChatGPT as well. This bot is perfect for complex math calculations and graphs, for conversions between different units, and for asking questions about any of the knowledge gathered by humanity so far. The breadth of the capabilities here is seriously impressive, and when needed the GPT is able to pull data and images from Wolfram Alpha seamlessly.

The Rise of AI and Blended Attacks: Key Takeaways from RSAC 2024

The 2024 RSA Conference can be summed up in two letters: AI. AI was everywhere. It was the main topic of more than 130 sessions. Almost every company with a booth in the Expo Hall advertised AI as a component in their solution. Even casual conversations with colleagues over lunch turned to AI. In 2023, …

The post The Rise of AI and Blended Attacks: Key Takeaways from RSAC 2024 appeared first on DTEX Systems Inc.

The post The Rise of AI and Blended Attacks: Key Takeaways from RSAC 2024 appeared first on Security Boulevard.

Chief Scientist Ilya Sutskever leaves OpenAI six months after Altman ouster

An image Ilya Sutskever tweeted with his OpenAI resignation announcement. From left to right: New OpenAI Chief Scientist Jakub Pachocki, President Greg Brockman, Sutskever, CEO Sam Altman, and CTO Mira Murati. (credit: Ilya Sutskever / X)

On Tuesday evening, OpenAI Chief Scientist Ilya Sutskever announced that he is leaving the company he co-founded, six months after he participated in the coup that temporarily ousted OpenAI CEO Sam Altman. Jan Leike, a fellow member of Sutskever's Superalignment team, is reportedly resigning with him.

"After almost a decade, I have made the decision to leave OpenAI," Sutskever tweeted. "The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly."

Sutskever has been with the company since its founding in 2015 and is widely seen as one of the key engineers behind some of OpenAI's biggest technical breakthroughs. As a former OpenAI board member, he played a key role in the removal of Sam Altman as CEO in the shocking firing last November. While it later emerged that Altman's firing primarily stemmed from a power struggle with former board member Helen Toner, Sutskever sided with Toner and personally delivered the news to Altman that he was being fired on behalf of the board.


Project Astra Is Google's 'Multimodal' Answer to the New ChatGPT

At Google I/O today, Google introduced a "next-generation AI assistant" called Project Astra that can "make sense of what your phone's camera sees," reports Wired. It follows yesterday's launch of GPT-4o, a new AI model from OpenAI that can quickly respond to prompts via voice and talk about what it 'sees' through a smartphone camera or on a computer screen. It "also uses a more humanlike voice and emotionally expressive tone, simulating emotions like surprise and even flirtatiousness," notes Wired. From the report: In response to spoken commands, Astra was able to make sense of objects and scenes as viewed through the devices' cameras, and converse about them in natural language. It identified a computer speaker and answered questions about its components, recognized a London neighborhood from the view out of an office window, read and analyzed code from a computer screen, composed a limerick about some pencils, and recalled where a person had left a pair of glasses. [...] Google says Project Astra will be made available through a new interface called Gemini Live later this year. [Demis Hassabis, the executive leading the company's effort to reestablish leadership in AI] said that the company is still testing several prototype smart glasses and has yet to make a decision on whether to launch any of them. Hassabis believes that imbuing AI models with a deeper understanding of the physical world will be key to further progress in AI, and to making systems like Project Astra more robust. Other frontiers of AI, including Google DeepMind's work on game-playing AI programs, could help, he says. Hassabis and others hope such work could be revolutionary for robotics, an area that Google is also investing in. "A multimodal universal agent assistant is on the sort of track to artificial general intelligence," Hassabis said in reference to a hoped-for but largely undefined future point where machines can do anything and everything that a human mind can.
"This is not AGI or anything, but it's the beginning of something."


The Biggest Differences Between Claude AI and ChatGPT

AI is a fascinating field, one that has seen a ton of advancements in recent years. In fact, OpenAI's ChatGPT has singlehandedly increased the hype around generative AI to new levels. But the days of ChatGPT being the only viable AI chatbot option are long gone. Now, others are available, including Anthropic's Claude AI, which has some key differences from the AI chatbot most people are familiar with. The question is this: Can Anthropic's version of ChatGPT stand up to the original?

What is Anthropic AI?

Anthropic is an AI startup co-founded by ex-OpenAI members. It's especially notable because the company has a much stricter set of ethics surrounding its AI than OpenAI currently does. Its founders include the Amodei siblings, Daniela and Dario, who were instrumental in creating GPT-3.

The Amodei siblings, as well as others, left OpenAI and founded Anthropic to create an alternative to ChatGPT that better addressed their AI safety concerns. One way that Anthropic has differentiated itself from OpenAI is by training its AI to align with a "document of constitutional AI principles," like opposition to inhumane treatment, as well as support of freedom and privacy.

What is Claude AI?

Claude AI, or the latest version of the model, Claude 3, is Anthropic's version of ChatGPT. Like ChatGPT, Claude 3 is an AI chatbot with a large language model (LLM) running behind it. However, it is designed by a different company, and thus offers some differences from OpenAI's current GPT model. It's probably the strongest competitor out of the various ChatGPT alternatives that have popped up, and Anthropic continues to update it with a ton of new features and capabilities.

Anthropic technically offers four versions of Claude, including Claude 1, Claude 2, Claude-Instant, and the latest update, Claude 3. While each is similar in nature, the language models all offer some subtle differences in capability.

Can Claude do the same things as ChatGPT?

If you have any experience using ChatGPT, you're already well on your way to using Claude, too. The system uses a simple chat box, in which you can post queries to get responses from the system. It's as simple as it gets, and you can even copy the responses Claude offers, retry your question, or ask it to provide additional feedback. It's very similar to ChatGPT.

While Claude can do a lot of the same things that ChatGPT can, there are some limitations. Where ChatGPT now has internet access, Claude is only trained on the information that the developers at Anthropic have provided it with, which is limited to August 2023, according to the latest notes from Anthropic. As such, it cannot look beyond that scope.

Claude also cannot interpret or create images, something that you can now do in ChatGPT thanks to the introduction of DALL-E 3. Anthropic does offer similar things to ChatGPT, including a cheaper and faster processing option—Claude-Instant—though it is less capable than Claude 3. The previous update, Claude 2, is considered on par with ChatGPT's GPT-4 model. Claude 3, on the other hand, has actually outperformed GPT-4 in a number of areas.

Of course, all of that pales in comparison to what OpenAI has made possible with the newly released GPT-4o. While all of its newest ground-breaking features haven't been released just yet, OpenAI has really upped the ante, bringing full multimodal support to the AI chatbot. Now, ChatGPT will be able to respond directly to questions, you'll be able to interrupt its answers when using voice mode, and you can even capture both live video and your device's display and share them directly with the chatbot to get real-time responses.

How much does Claude cost?

Claude AI is actually free to try, though that freedom comes with some limitations, like how many questions you can ask and how much data the chatbot can process. There is a premium subscription, called Claude Pro, which will grant you additional data for just $20 a month.

Unlike ChatGPT's premium subscription, using the free version of Claude actually gives you access to Claude's latest model, though you miss out on the added data tokens and higher priority that a subscription offers.

How does Claude's free version compare to ChatGPT's?

Like ChatGPT, Claude offers a free version. Both are solid options to try out the AI chatbots, but if you plan to use them extensively, it's definitely worth looking at the more premium subscription plans that they offer.

While Claude gives you access to its more advanced Claude 3 in the free version, it does come with severe limits. You can't process PDFs larger than 10 megabytes, for instance, and its usage limits can vary depending on the current load. Anthropic hasn't shared an exact limit or even a range that you can expect, but CNBC estimates it's about five summaries every four hours. At the end of the day, it depends on how many people are using the system when you are. The nice thing about Claude 3 is that it brings a ton of new features you can try out in Claude's free version, including multilingual capabilities, vision and image processing, and easier-to-steer prompting.

ChatGPT used to limit free users to GPT-3.5, locking them to the older and thus less reliable model. That, however, has changed with the release of GPT-4o, which introduces limited usage rates for free ChatGPT accounts. OpenAI hasn't shared specifics on how limited GPT-4o is with the free version, but it does give you access to all the improvements the system offers, until you eventually run out of usage and get bumped back down to GPT-3.5.

Still, that does mean you can technically use GPT-4o without paying a single cent. However, there are some limitations in place if the service is extremely busy, and you may see your requests taking much longer or even returned if usage is high. It's also possible that your free ChatGPT account may not even be available during certain times of high activity, as OpenAI sometimes limits access to free accounts to help mitigate high server usage.

It's also important to note that GPT-3.5 is more likely to hallucinate than GPT-4 or the newer GPT-4o, so it's important to double-check all the information that it provides. (That said, you should always double-check important information generated by AI.) The free version of ChatGPT also now has access to the GPT Store: Here, you can make use of various GPTs, which personalize the chatbot to respond to your questions and queries in different ways. Claude doesn't currently offer any kind of system like this, so you'll have to word your prompts correctly to get the most out of it.

Claude Pro vs. ChatGPT Plus: How much is a subscription?

If you're planning to use Claude or ChatGPT extensively, it might be worth upgrading to one of the currently available monthly plans. Both Anthropic and OpenAI offer subscription plans, so how do you decide which one to purchase? Here's how they stack up against each other.

Claude Pro costs $20 a month. Unlike ChatGPT Plus (which gives you access to OpenAI's GPT-4 and GPT-4 Turbo models), Claude already offers its latest and greatest model in the free, limited plan. As such, subscribing for $20 a month will simply reward you with at least five times the usage of the free service, making it easier to send longer messages and have longer conversations before the model's context tokens run out (context tokens determine how much information the AI can understand when it responds), as well as increasing the size of the files you can attach. Claude Pro will also get you faster response times and higher availability and priority when demand is high.

On the other hand, ChatGPT Plus seems to offer a bit more for that $20 subscription, as it nets you GPT-4 and GPT-4 Turbo, OpenAI's most complex and successful language models. These models are capable of far more than the free systems available in ChatGPT without a subscription. Subscribing to ChatGPT Plus will also get you faster response times, priority access when demand for the chatbot is high, and access to the newest features, such as DALL-E 3's image creation option.

Is Claude AI more accurate than ChatGPT?

Accuracy is an area that AI language models, such as those that run Claude and ChatGPT, still struggle with. While these models can be accurate and are trained on terabytes of data, they have been known to "hallucinate" and create their own facts and data.

My own experience has shown that Claude tends to be more factually accurate when summarizing things than ChatGPT, but that's based on a very small subset of data. And Claude's training data is quite outdated if you're looking to discuss recent happenings. It also doesn't have open access to the internet, which limits the ways it can hallucinate or pull from bad sources, a blessing and a curse, as it locks you out of the good sources, too.

No matter which service you go with, they're both going to have problems, and you'll want to double-check any information that ChatGPT or Claude provides you with to ensure it isn't plagiarized from something else—or just entirely made up.

Is Claude better than ChatGPT?

There are some places where Claude is better than ChatGPT, and Claude 3 reportedly outperforms ChatGPT's latest models, based on Anthropic's own data. The biggest difference, for starters, is that Claude takes a more safety-focused approach to AI, placing more restrictions on its language models than ChatGPT does. This includes a more restrictive ethical stance, though ChatGPT has continued to evolve how it approaches the ethics of AI as a whole.

Claude also offers longer context token limits than ChatGPT currently does. Tokens are broken-down pieces of text the AI can understand (OpenAI says one token is roughly four characters of text). Claude offers 200,000 tokens for Claude 3, while GPT-4 tops out at 32,000 in some plans, which may be useful for those who want to have longer conversations before they have to worry about the AI model losing track of what they're talking about. This larger context window also means that Claude is much better at analyzing large files, which is something to keep in mind if you plan to use it for that sort of thing.
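As a back-of-the-envelope illustration of what those numbers mean in practice, here's a sketch using OpenAI's rough four-characters-per-token rule of thumb. Note that both the ratio and the hard-coded limits are approximations for this example, not exact tokenizer output, and the model labels are just dictionary keys, not API model names:

```python
# Rough context-window check using the public rule of thumb that one
# token is roughly four characters of English text. Real token counts
# vary by language and content; this is only a coarse estimate.

CONTEXT_LIMITS = {
    "claude-3": 200_000,   # tokens, per Anthropic
    "gpt-4-32k": 32_000,   # tokens, for GPT-4's 32k variant
}

CHARS_PER_TOKEN = 4  # coarse heuristic for English prose


def estimated_tokens(text: str) -> int:
    """Estimate a token count from raw character length."""
    return max(1, len(text) // CHARS_PER_TOKEN)


def fits_in_context(text: str, model: str) -> bool:
    """Check whether the text plausibly fits in a model's context window."""
    return estimated_tokens(text) <= CONTEXT_LIMITS[model]


# A ~500,000-character document works out to ~125,000 estimated tokens:
# within Claude 3's window, far beyond GPT-4's 32k variant.
doc = "x" * 500_000
print(fits_in_context(doc, "claude-3"))   # True
print(fits_in_context(doc, "gpt-4-32k"))  # False
```

This is why the larger window matters mostly for long files and long-running conversations: a document that overflows the window has to be chunked or truncated before the model can see all of it.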

However, there are also several areas where ChatGPT comes out ahead. Access to the internet is a big one: having open access to the internet means ChatGPT is always up to date on the latest information on the web. It also means the bot is susceptible to more false information, though, so there's definitely a trade-off. With the introduction of GPT-4o's upcoming features like voice mode, ChatGPT will be able to respond to your queries in real time. If Anthropic has plans for a similar feature set for Claude, it hasn't discussed them publicly just yet.

OpenAI has also made it easy to create your own custom GPTs using its API and language models, something that, as I noted above, Claude doesn't support just yet. In addition, ChatGPT gives you in-chat image creation thanks to DALL-E 3, which is genuinely impressive for AI image generation.

Ultimately, Claude and ChatGPT are both great AI chatbots that offer a ton of usability for those looking to dip their toes into AI. If you want the latest cutting-edge features, though, the trophy currently goes to ChatGPT, as the things you're able to do with GPT-4o open entirely new doors that Claude isn't trying to open just yet.

Downranking won’t stop Google’s deepfake porn problem, victims say

After backlash over Google's search engine becoming the primary traffic source for deepfake porn websites, Google has started burying these links in search results, Bloomberg reported.

Over the past year, Google has been driving millions to controversial sites distributing AI-generated pornography depicting real people in fake sex videos that were created without their consent, Similarweb found. While anyone can be targeted—police are already bogged down dealing with a flood of fake AI child sex images—female celebrities are the most common victims. And their fake non-consensual intimate imagery is more easily discoverable on Google by searching just about any famous name with the keyword "deepfake," Bloomberg noted.

Google refers to this content as "involuntary fake" or "synthetic pornography." The search engine provides a path for victims to report that content whenever it appears in search results. And when processing these requests, Google also removes duplicates of any flagged deepfakes.

Google I/O Showed Gemini Still Needs Time to Bake

During the kickoff keynote for Google I/O 2024, the general tone seemed to be, “Can we have an extension?” Google’s promised AI improvements are definitely taking center stage here, but with a few exceptions, most are still in the oven.

That’s not too surprising—this is a developer conference, after all. But it seems like consumers will have to wait a while longer for their promised "Her" moment. Here’s what you can expect once Google’s new features start to arrive.

AI in Google Search

[Image: searching for yoga with Google AI. Credit: Google/YouTube]

Maybe the most impactful addition for most people will be expanded Gemini integration in Google Search. While Google already had a “generative search” feature in Search Labs that could jot out a quick paragraph or two, everyone will soon get the expanded version, “AI Overviews.”

When they appear in searches, AI Overviews can generate multiple paragraphs of information in response to queries, complete with subheadings. They also provide more context than their predecessor and can handle more detailed prompts.

For instance, if you live in a sunny area with good weather and ask for “restaurants near you,” Overviews might give you a few basic suggestions, but also a separate subheading with restaurants that have good patio seating.

In the more traditional search results page, you’ll instead be able to use “AI organized search results,” which eschew traditional SEO to intelligently recommend web pages to you based on highly specific prompts.

For instance, you can ask Google to “create a gluten free three-day meal plan with lots of veggies and at least two desserts,” and the search page will create several subheadings with links to appropriate recipes under each.

Google is also bringing AI to how you search, with an emphasis on multimodality—meaning you can use it with more than text. Specifically, an “Ask with Video” feature is in the works that will allow you to simply point your phone camera at an object, ask for identification or repair help, and get answers via generative search.

Google didn't directly address how it's handling criticism that AI search results essentially steal content from sources around the web without users needing to click through to the original source. That said, demonstrators highlighted multiple times that these features bring you to useful links you can check out yourself, perhaps covering their bases in the face of these critiques.

AI Overviews are already rolling out to Google users in the US, with AI Organized Search Results and Ask with Video set for “the coming weeks.”

Search your photos with AI

[Image: Ask Photos demo. Credit: Google/YouTube]

Another of the more concrete features in the works is “Ask Photos,” which plays with multimodality to help you sort through the hundreds of gigabytes of images on your phone.

Say your daughter took swimming lessons last year and you've lost track of your first photos of her in the water. Ask Photos will let you simply ask, "When did my daughter learn to swim?" Your phone will automatically know who you mean by "your daughter" and surface images from her first swimming lesson.

That’s similar to searching your photo library for pictures of your cat by just typing “cat,” sure, but the idea is that the multimodal AI can support more detailed questions and understand what you’re asking with greater context, powered by Gemini and the data already stored on your phone.

Other details are light, with Ask Photos set to debut “in the coming months.”

Project Astra: an AI agent in your pocket

[Image: Project Astra in action. Credit: Google/YouTube]

Here's where we get into more pie-in-the-sky stuff. Project Astra is the most C-3PO we've seen AI get yet. The idea is you'll be able to load up the Gemini app on your phone, open your camera, point it around, and ask questions and get help based on what your phone sees.

For instance, point at a speaker, and Astra will be able to tell you what parts are in the hardware and how they’re used. Point at a drawing of a cat with dubious vitality, and Astra will answer your riddle with “Schrödinger’s Cat.” Ask it where your glasses are, and if Astra was looking at them earlier in your shot, it will be able to tell you.

This is maybe the classical dream when it comes to AI, and quite similar to OpenAI's recently announced GPT-4o, so it makes sense that it’s not ready yet. Astra is set to come “later this year,” but curiously, it’s also supposed to work on AR glasses as well as phones. Perhaps we’ll be learning of a new Google wearable soon.

Make a custom podcast hosted by robots

[Image: setting up the robot podcast in NotebookLM. Credit: Google/YouTube]

It’s unclear when this feature will be ready, since it seems to be more of an example for Google’s improved AI models than a headliner, but one of the more impressive (and possibly unsettling) demos Google showed off during I/O involved creating a custom podcast hosted by AI voices.

Say your son is studying physics in school, but is more of an audio learner than a text-oriented one. Supposedly, Gemini will soon let you dump written PDFs into Google’s NotebookLM app and ask Gemini to make an audio program discussing them. The app will generate what feels like a podcast, hosted by AI voices talking naturally about the topics from the PDFs.

Your son will then be able to interrupt the hosts at any time to ask for clarification.

Hallucination is obviously a major concern here, and the naturalistic language might be a little “cringe,” for lack of a better word. But there’s no doubt it’s an impressive showcase…if only we knew when we’ll be able to recreate it.

Paid features

[Image: Gemini side panel. Credit: Google/YouTube]

There are a few other tools in the works that seem purpose-built for your typical consumer, but for now, they're going to be limited to Google's paid Workspace (and in some cases Google One AI Premium) plans.

The most promising of these is Gmail integration, which takes a three-pronged approach. The first is summaries, which can read through a Gmail thread and break down key points for you. That’s not too novel, nor is the second prong, which allows AI to suggest contextual replies for you based on information in your other emails.

But Gemini Q&A seems genuinely transformative. Imagine you’re looking to get some roofing work done and you’ve already emailed three different construction firms for quotes. Now, you want to make a spreadsheet of each firm, their quoted price, and their availability. Instead of having to sift through each of your emails with them, you can instead ask a Gemini box at the bottom of Gmail to make that spreadsheet for you. It will search your Gmail inbox and generate a spreadsheet within minutes, saving you time and perhaps helping you find missed emails.

This sort of contextual spreadsheet building will also be coming to apps outside of Gmail, but Google was also proud to show off its new “Virtual Gemini Powered Teammate.” Still in the early stages, this upcoming Workspace feature is kind of like a mix between a typical Gemini chat box and Astra. The idea is that organizations will be able to add AI agents to their Slack equivalents that will be on call to answer questions and create documents on a 24/7 basis.

Gmail’s Gemini-powered summarization features will be rolling out this month to Workspace Labs users, with its other Gmail features coming to Labs in July.

Gems

[Image: Gems on stage. Credit: Google/YouTube]

Earlier this year, OpenAI replaced ChatGPT plugins with “GPTs,” allowing users to create custom versions of its ChatGPT chatbots built to handle specific questions. Gems are Google’s answer to this, and work relatively similarly. You’ll be able to create a number of Gems that each have their own page within your Gemini interface, and each answer to a specific set of instructions. In Google’s demo, suggested Gems included examples like “Yoga Bestie,” which offers exercise advice.

Gems are another feature that won’t see the light of day until a few months from now, so for now, you'll have to stick with GPTs.

Agents

[Image: Sundar Pichai on stage. Credit: Google/YouTube]

Fresh off the muted reception to the Humane AI Pin and Rabbit R1, AI aficionados were hoping that Google I/O would show Gemini’s answer to the promises behind these devices, i.e. the ability to go beyond simply collating information and actually interact with websites for you. What we got was a light tease with no set release date.

In a pitch from Google CEO Sundar Pichai, we saw the company’s intention to make AI Agents that can “think multiple steps ahead.” For example, Pichai talked about the possibility for a future Google AI Agent to help you return shoes. It could go from “searching your inbox for the receipt,” all the way to “filling out a return form,” and “scheduling a pickup,” all under your supervision.

All of this had a huge caveat in that it wasn’t a demo, just an example of something Google wants to work on. “Imagine if Gemini could” did a lot of heavy lifting during this part of the event.

New Google AI Models

[Image: Veo slide on stage. Credit: Google/YouTube]

In addition to highlighting specific features, Google also touted the release of new AI models and updates to its existing ones. From generative models like Imagen 3 to larger and more contextually intelligent builds of Gemini, these aspects of the presentation were intended more for developers than end users, but there are still a few interesting points to pull out.

The key standouts are the introduction of Veo and Music AI Sandbox, which generate AI video and sound, respectively. There aren't many details on how they work yet, but Google brought out big stars like Donald Glover and Wyclef Jean for promising quotes like, "Everybody's gonna become a director" and, "We digging through the infinite crates."

For now, the best demos we have for these generative models are examples posted to celebrity YouTube channels.

Google also wouldn’t stop talking about Gemini 1.5 Pro and 1.5 Flash during its presentation, new versions of its LLM primarily meant for developers that support larger token counts, allowing for more contextuality. These probably won’t matter much to you, but pay attention to Gemini Advanced.

Gemini Advanced is already on the market as Google’s paid Gemini plan, and allows a larger amount of questions, some non-developer interaction with Gemini 1.5 Pro, integration with various apps such as Docs (including some but not all of today's announced Workspace features), and uploads of files like PDFs.

Some of Google’s promised features sound like they’ll need you to have a Gemini Advanced subscription, specifically those that want you to upload documents so the chatbot can answer questions related to them or riff off them with its own content. We don’t know for sure yet what will be free and what won’t, but it’s yet another caveat to keep in mind for Google’s “keep your eye on us” promises this I/O.

That's a wrap on Google's general announcements for Gemini. That said, they also made announcements for new AI features in Android, including a new Circle to Search ability and using Gemini for scam detection. (Not Android 15 news, however: That comes tomorrow.)

AI in Gmail Will Sift Through Emails, Provide Search Summaries, Send Emails

An anonymous reader shares a report: Google's Gemini AI often just feels like a chatbot built into a text-input field, but you can really start to do special things when you give it access to a ton of data. Gemini in Gmail will soon be able to search through your entire backlog of emails and show a summary in a sidebar. That's simple to describe but solves a huge problem with email: even search only brings up a list of email subjects, and you have to click through to each one just to read it. Having an AI sift through a bunch of emails and provide a summary sounds like a huge time saver and something you can't do with any other interface. Google's one-minute demo of this feature showed a big blue Gemini button at the top right of the Gmail web app. Tapping it opens the normal chatbot sidebar you can type in. Asking for a summary of emails from a certain contact will get you a bullet-point list of what has been happening, with a list of "sources" at the bottom that will jump you right to a certain email. In the last second of the demo, the user types, "Reply saying I want to volunteer for the parent's group event," hits "enter," and then the chatbot instantly, without confirmation, sends an email.

Read more of this story at Slashdot.

Google's Invisible AI Watermark Will Help Identify Generative Text and Video

Among Google's swath of new AI models and tools announced today, the company is also expanding its AI content watermarking and detection technology to work across two new mediums. The Verge: Google's DeepMind CEO, Demis Hassabis, took the stage for the first time at the Google I/O developer conference on Tuesday to talk not only about the team's new AI tools, like the Veo video generator, but also about the new upgraded SynthID watermark imprinting system. It can now mark video that was digitally generated, as well as AI-generated text. [...] Google had also enabled SynthID to inject inaudible watermarks into AI-generated music that was made using DeepMind's Lyria model. SynthID is just one of several AI safeguards in development to combat misuse by the tech, safeguards that the Biden administration is directing federal agencies to build guidelines around.
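Google hasn't published how SynthID's text watermark works internally. One approach from the public research literature, though, is to bias generation toward a secret, seeded "green list" of words, so that a detector sharing the seed can flag text with an improbable surplus of green words. The sketch below is a toy illustration of that general idea only, not SynthID itself; every name, the seed, and the threshold are invented for this example:

```python
# Toy statistical text watermark: deterministically assign roughly half
# of all words to a secret "green list" via a seeded hash. A generator
# that prefers green words leaves a statistical fingerprint; a detector
# that knows the seed measures the green fraction. Illustrative only --
# this is NOT how Google's SynthID is documented to work.

import hashlib

SECRET_SEED = 42  # shared between the generator and the detector


def is_green(word: str) -> bool:
    """Deterministically assign ~half of all words to the green list."""
    digest = hashlib.sha256(f"{SECRET_SEED}:{word.lower()}".encode()).digest()
    return digest[0] % 2 == 0


def green_fraction(text: str) -> float:
    """Fraction of words in the text that fall on the green list."""
    words = text.split()
    return sum(is_green(w) for w in words) / len(words)


def looks_watermarked(text: str, threshold: float = 0.7) -> bool:
    """Unwatermarked text hovers near 0.5 green; watermarked text runs high."""
    return green_fraction(text) >= threshold
```

The appeal of this family of schemes is that detection needs only the seed, not the original model, though paraphrasing or heavy editing can wash the signal out.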

Google strikes back at OpenAI with “Project Astra” AI agent prototype

A video still of the Project Astra demo at the Google I/O conference keynote in Mountain View on May 14, 2024. (credit: Google)

Just one day after OpenAI revealed GPT-4o, which it bills as being able to understand what's taking place in a video feed and converse about it, Google announced Project Astra, a research prototype that features similar video comprehension capabilities. It was announced by Google DeepMind CEO Demis Hassabis on Tuesday at the Google I/O conference keynote in Mountain View, California.

Hassabis called Astra "a universal agent helpful in everyday life." During a demonstration, the research model showcased its capabilities by identifying sound-producing objects, providing creative alliterations, explaining code on a monitor, and locating misplaced items. The AI assistant also exhibited its potential in wearable devices, such as smart glasses, where it could analyze diagrams, suggest improvements, and generate witty responses to visual prompts.

Google says that Astra uses the camera and microphone on a user's device to provide assistance in everyday life. By continuously processing and encoding video frames and speech input, Astra creates a timeline of events and caches the information for quick recall. The company says that this enables the AI to identify objects, answer questions, and remember things it has seen that are no longer in the camera's frame.
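The frame-and-speech caching Google describes can be pictured as a simple timestamped event log that the assistant can query after the fact. The sketch below is only a guess at the shape of such a structure, not Google's actual design; the `Timeline` API, the event kinds, and the labels are all invented for illustration:

```python
# Minimal sketch of a "timeline of events" cache: each processed frame
# or speech snippet is stored with a timestamp and a label, so the
# assistant can answer "where did I last see X?" even after X has left
# the camera's frame. Hypothetical design, for illustration only.

from dataclasses import dataclass


@dataclass
class Event:
    timestamp: float   # seconds since the session started
    kind: str          # "frame" or "speech"
    label: str         # e.g. an object detected in the frame


class Timeline:
    def __init__(self) -> None:
        self.events: list[Event] = []

    def record(self, timestamp: float, kind: str, label: str) -> None:
        """Cache one encoded frame or speech snippet."""
        self.events.append(Event(timestamp, kind, label))

    def last_seen(self, label: str):
        """Return the most recent timestamp at which `label` was observed."""
        for event in reversed(self.events):
            if event.label == label:
                return event.timestamp
        return None


tl = Timeline()
tl.record(1.0, "frame", "glasses")   # glasses visible early in the session
tl.record(5.0, "frame", "speaker")
print(tl.last_seen("glasses"))  # 1.0
```

The "remember where my glasses are" demo reduces to exactly this kind of reverse scan over cached observations; the hard parts Google is solving are the continuous perception and encoding that fill the log in the first place.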

Google is “reimagining” search in “the Gemini era” with AI Overviews

"Google will do the Googling for you," says firm's search chief. (credit: Google)

Search is still important to Google, but soon it will change. At its all-in-one AI Google I/O event Tuesday, the company introduced a host of AI-enabled features coming to Google Search at various points in the near future, which will "do more for you than you ever imagined."

"Google will do the Googling for you," said Liz Reid, Google's head of Search.

It's not AI in every search, but it will seemingly be hard to avoid a lot of offers to help you find, plan, and brainstorm things. "AI Overviews," the successor to the Search Generative Experience, will provide summary answers to questions, along with links to sources. You can also soon submit a video as a search query, perhaps to identify objects or provide your own prompts by voice.

AI is changing the shape of leadership – how can business leaders prepare? – Source: www.cybertalk.org

Source: www.cybertalk.org – Author: slandau By Ana Paula Assis, Chairman, Europe, Middle East and Africa, IBM. EXECUTIVE SUMMARY: From the shop floor to the boardroom, artificial intelligence (AI) has emerged as a transformative force in the business landscape, granting organizations the power to revolutionize processes and ramp up productivity. The scale and scope of this […]

The post AI is changing the shape of leadership – how can business leaders prepare? – Source: www.cybertalk.org appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Cybersecurity Concerns Surround ChatGPT 4o's Launch; OpenAI Assures Beefed-Up Safety Measures

The field of Artificial Intelligence is rapidly evolving, and OpenAI's ChatGPT is a leader in this revolution. This groundbreaking large language model (LLM) redefined the expectations for AI. Just 18 months after its initial launch, OpenAI has released a major update: GPT-4o. This update widens the gap between OpenAI and its competitors, especially the likes of Google. OpenAI unveiled GPT-4o, with the "o" signifying "omni," during a live stream earlier this week. This latest iteration boasts significant advancements across various aspects. Here's a breakdown of the key features and capabilities of OpenAI's GPT-4o.

Features of GPT-4o

  • Enhanced Speed and Multimodality: GPT-4o operates at a faster pace than its predecessors and excels at understanding and processing diverse information formats – written text, audio, and visuals. This versatility allows GPT-4o to engage in more comprehensive and natural interactions.
  • Free Tier Expansion: OpenAI is making AI more accessible by offering some GPT-4o features to free-tier users. This includes the ability to access web-based information during conversations, discuss images, upload files, and even utilize enterprise-grade data analysis tools (with limitations). Paid users will continue to enjoy a wider range of functionalities.
  • Improved User Experience: The blog post accompanying the announcement showcases some impressive capabilities. GPT-4o can now generate convincingly realistic laughter, potentially pushing the boundaries of the uncanny valley and increasing user adoption. Additionally, it excels at interpreting visual input, allowing it to recognize sports on television and explain the rules – a valuable feature for many users.

However, despite the new features and capabilities, the potential misuse of ChatGPT is still on the rise. The new version, though deemed safer than the previous versions, is still vulnerable to exploitation and can be leveraged by hackers and ransomware groups for nefarious purposes. Addressing the security concerns regarding the new version, OpenAI shared a detailed post about the new and advanced security measures being implemented in GPT-4o.

Security Concerns Surround ChatGPT 4o

The implications of ChatGPT for cybersecurity have been a hot topic of discussion among security leaders and experts as many worry that the AI software can easily be misused. Since its inception in November 2022, several organizations such as Amazon, JPMorgan Chase & Co., Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, Wells Fargo and Verizon have restricted access or blocked the use of the program citing security concerns. In April 2023, Italy became the first country in the world to ban ChatGPT after accusing OpenAI of stealing user data. These concerns are not unfounded.

OpenAI Assures Safety

OpenAI reassured people that GPT-4o has "new safety systems to provide guardrails on voice outputs," plus extensive post-training and filtering of the training data to prevent ChatGPT from saying anything inappropriate or unsafe. GPT-4o was built in accordance with OpenAI's internal Preparedness Framework and voluntary commitments. More than 70 external security researchers red teamed GPT-4o before its release. In an article published on its official website, OpenAI states that its evaluations of cybersecurity do not score above "medium risk."

"GPT-4o has safety built-in by design across modalities, through techniques such as filtering training data and refining the model's behavior through post-training. We have also created new safety systems to provide guardrails on voice outputs. Our evaluations of cybersecurity, CBRN, persuasion, and model autonomy show that GPT-4o does not score above Medium risk in any of these categories," the post said. "This assessment involved running a suite of automated and human evaluations throughout the model training process. We tested both pre-safety-mitigation and post-safety-mitigation versions of the model, using custom fine-tuning and prompts, to better elicit model capabilities," it added.

OpenAI shared that it also employed the services of over 70 experts to identify risks and amplify safety. "GPT-4o has also undergone extensive external red teaming with 70+ external experts in domains such as social psychology, bias and fairness, and misinformation to identify risks that are introduced or amplified by the newly added modalities. We used these learnings to build out our safety interventions in order to improve the safety of interacting with GPT-4o. We will continue to mitigate new risks as they're discovered," it said.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

The Cyber Express Sets the Stage to Host World CyberCon META Edition 2024 in Dubai 

World CyberCon META Edition 2024

This May, the heartbeat of the cybersecurity industry will resonate through Dubai, where The Cyber Express is set to host the much-anticipated third iteration of the World CyberCon META Edition 2024.   Scheduled for May 23, 2024, at Habtoor Palace Dubai, this premier event promises a comprehensive day filled with immersive experiences tailored to address the dynamic challenges and innovations in cybersecurity.  This year’s theme, "Securing Middle East’s Digital Future: Challenges and Solutions," lays the foundation for a unique gathering that is crucial for any professional navigating the cybersecurity landscape.   The World CyberCon META Edition will feature a stellar lineup of more than 40 prominent Chief Information Security Officers (CISOs) and other cybersecurity leaders who will share invaluable insights and strategies. Notable speakers include: 
  • Sithembile (Nkosi) Songo, CISO, ESKOM  
  • Dina Alsalamen, VP, Head of Cyber and Information Security Department, Bank ABC  
  • Anoop Kumar, Head of Information Security Governance Risk & Compliance, Gulf News  
  • Irene Corpuz, Cyber Policy Expert, Dubai Government Entity, Board Member, and Co-Founder, Women in Cyber Security Middle East (WiCSME)   
  • Abhilash Radhadevi, Head of Cybersecurity, OQ Trading  
  • Ahmed Nabil Mahmoud, Head of Cyber Defense and Security Operations, Abu Dhabi Islamic Bank 

The World CyberCon META Edition 2024

Highlights from the 2023 World CyberCon in Mumbai.

A Comprehensive Platform for Learning & Innovation

The World CyberCon META Edition 2024 promises a rich agenda with topics ranging from the nuances of national cybersecurity strategies to the latest in threat intelligence and protection against advanced threats. Discussions will span a variety of crucial subjects including:
  • Securing a Digital UAE: National Cybersecurity Strategy 
  • Predictive Cyber Threat Intelligence: Anticipating Tomorrow’s Attacks Today 
  • Navigating the Cyber Threat Terrain: Unveiling Innovative Approaches to Cyber Risk Scoring 
  • Fortifying Against Ransomware: Robust Strategies for Prevention, Mitigation, and Swift Recovery 
  • Strategic Investments in Cybersecurity: Leveraging AI and ML for Enhanced Threat Detection 
Who Should Attend?

The World CyberCon META Edition 2024 is tailored for CISOs, CIOs, CTOs, security auditors, heads of IT, cybercrime specialists, and network engineers. It's an invaluable opportunity for those invested in the future of internet safety to gain insights, establish connections, and explore new business avenues.

Engage and Network

In addition to knowledge sessions, the conference will feature interactive workshops, an engaging exhibition zone, and plenty of networking opportunities. This event is set to honor the significant contributions of cybersecurity professionals and provide them with the recognition they deserve.

Secure Your Place

Don't miss this unique chance to connect with leading professionals and gain insights from the forefront of cybersecurity. Reserve your spot at World CyberCon META Edition 2024 by visiting https://thecyberexpress.com/cyber-security-events/world-cybercon-3rd-edition-meta/.

More Information

For more details on event sponsorship opportunities and delegate passes, please contact Ashish Jaiswal at ashish.j@thecyberexpress.com.

About The Cyber Express

Stay informed with TheCyberExpress.com, your essential source for cybersecurity news, insights, and resources, dedicated to empowering you with the knowledge needed to protect your digital assets.

Join us in shaping the digital future at World CyberCon META Edition 2024 in Dubai. Let's secure tomorrow together!

Slashdot Asks: How Do You Protest AI Development?

An anonymous reader quotes a report from Wired: On a side street outside the headquarters of the Department of Science, Innovation and Technology in the center of London on Monday, 20 or so protesters are getting their chants in order. "What do we want? Safe AI! When do we want it?" The protesters hesitate. "Later?" someone offers. The group of mostly young men huddle for a moment before breaking into a new chant. "What do we want? Pause AI! When do we want it? Now!" These protesters are part of Pause AI, a group of activists petitioning for companies to pause development of large AI models which they fear could pose a risk to the future of humanity. Other PauseAI protests are taking place across the globe: in San Francisco, New York, Berlin, Rome, Ottawa, and a handful of other cities. Their aim is to grab the attention of voters and politicians ahead of the AI Seoul Summit -- a follow-up to the AI Safety Summit held in the UK in November 2023. But the loosely organized group of protesters itself is still figuring out exactly the best way to communicate its message. "The Summit didn't actually lead to meaningful regulations," says Joep Meindertsma, the founder of PauseAI. The attendees at the conference agreed to the "Bletchley Declaration," but that agreement doesn't mean much, Meindertsma says. "It's only a small first step, and what we need are binding international treaties." [...] There is also the question of how PauseAI should achieve its aims. On the group's Discord, some members discussed the idea of staging sit-ins at the headquarters of AI developers. OpenAI, in particular, has become a focal point of AI protests. In February, Pause AI protesters gathered in front of OpenAI's San Francisco offices, after the company changed its usage policies to remove a ban on military and warfare applications for its products. Would it be too disruptive if protesters staged sit-ins or chained themselves to the doors of AI developers, one member of the Discord asked.
"Probably not. We do what we have to, in the end, for a future with humanity, while we still can." [...] Director of Pause AI US, Holly Elmore, wants the movement to be a "broad church" that includes artists, writers, and copyright owners whose livelihoods are put at risk from AI systems that can mimic creative works. "I'm a utilitarian. I'm thinking about the consequences ultimately, but the injustice that really drives me to do this kind of activism is the lack of consent" from companies producing AI models, she says. "We don't have to choose which AI harm is the most important when we're talking about pausing as a solution. Pause is the only solution that addresses all of them." [Joseph Miller, the organizer of PauseAI's protest in London] echoed this point. He says he's spoken to artists whose livelihoods have been impacted by the growth of AI art generators. "These are problems that are real today, and are signs of much more dangerous things to come." One of the London protesters, Gideon Futerman, has a stack of leaflets he's attempting to hand out to civil servants leaving the building opposite. He has been protesting with the group since last year. "The idea of a pause being possible has really taken root since then," he says. According to Wired, the leaders of Pause AI said they were not considering sit-ins or encampments near AI offices at this time. "Our tactics and our methods are actually very moderate," says Elmore. "I want to be the moderate base for a lot of organizations in this space. I'm sure we would never condone violence. I also want Pause AI to go further than that and just be very trustworthy." Meindertsma agrees, saying that more disruptive action isn't justified at the moment. "I truly hope that we don't need to take other actions. I don't expect that we'll need to. I don't feel like I'm the type of person to lead a movement that isn't completely legal." Slashdotters, what is the most effective way to protest AI development? 
Is the AI genie out of the bottle? Curious to hear your thoughts.

Read more of this story at Slashdot.

ChatGPT Is Getting a Mac App

OpenAI has launched an official macOS app for ChatGPT, with a Windows version coming "later this year." "Both free and paid users will be able to access the new app, but it will only be available to ChatGPT Plus users starting today before a broader rollout in 'the coming weeks,'" reports The Verge. From the report: In the demo shown by OpenAI, users could open the ChatGPT desktop app in a small window, alongside another program. They asked ChatGPT questions about what's on their screen -- whether by typing or saying it. ChatGPT could then respond based on what it "sees." OpenAI says users can ask ChatGPT a question by using the Option + Space keyboard shortcut, as well as take and discuss screenshots within the app. Further reading: OpenAI Launches New Free Model GPT-4o

Read more of this story at Slashdot.
