Microsoft Details How It's Developing AI Responsibly
Read more of this story at Slashdot.
With large language models needing quality data, some publishers are offering theirs at a price while others are blocking access
OpenAI, the developer of ChatGPT, knows that high-quality data matters in the artificial intelligence business – and news publishers have vast amounts of it.
“It would be impossible to train today’s leading AI models without using copyrighted materials,” the company said this year in a submission to the UK’s House of Lords, adding that limiting its options to books and drawings in the public domain would create underwhelming products.
Source: www.techrepublic.com – Author: Fiona Jackson
It is taking less time for organisations to detect attackers in their environment, a report by Mandiant Consulting, a part of Google Cloud, has found. This suggests that companies are strengthening their security posture. The M-Trends 2024 report also highlighted that the top targeted industries of 2023 were financial […]
The post Top 5 Global Cyber Security Trends of 2023, According to Google Report – Source: www.techrepublic.com first appeared on CISO2CISO.COM & CYBER SECURITY GROUP.
On Thursday, renowned AI researcher Andrej Karpathy, formerly of OpenAI and Tesla, tweeted a lighthearted proposal that large language models (LLMs) like the one that runs ChatGPT could one day be modified to operate in or be transmitted to space, potentially to communicate with extraterrestrial life. He said the idea was "just for fun," but with his influential profile in the field, the idea may inspire others in the future.
Karpathy's bona fides in AI almost speak for themselves: he received a PhD from Stanford under computer scientist Dr. Fei-Fei Li in 2015, became one of the founding members of OpenAI as a research scientist, and then served as senior director of AI at Tesla between 2017 and 2022. In 2023, Karpathy rejoined OpenAI for a year, leaving this past February. He's posted several highly regarded tutorials covering AI concepts on YouTube, and whenever he talks about AI, people listen.
Most recently, Karpathy has been working on a project called "llm.c" that implements the training process for OpenAI's 2019 GPT-2 LLM in pure C, dramatically speeding up the process and demonstrating that working with LLMs doesn't necessarily require complex development environments. The project's streamlined approach and concise codebase sparked Karpathy's imagination.
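llm.c itself is written in C, but the spirit of the project (that a training loop reduces to a handful of array operations) is easy to illustrate. The toy sketch below, written in Python with NumPy purely for illustration, is not llm.c's code: it trains a single linear classifier rather than GPT-2, but it shows the same forward pass, loss, and gradient update pattern in a few lines.

```python
# Illustrative sketch only: llm.c implements the full GPT-2 training loop in C.
# This NumPy toy shows the same core idea in miniature, with a forward pass,
# a softmax cross-entropy loss, and a gradient update as plain array ops.
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a single linear layer mapping 8 features to 4 classes.
W = rng.normal(scale=0.1, size=(8, 4))

def forward(x):
    """Logits -> softmax probabilities."""
    logits = x @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_step(x, y, lr=0.5):
    """One full-batch SGD step on softmax cross-entropy; returns the loss."""
    global W
    probs = forward(x)
    n = len(y)
    loss = -np.log(probs[np.arange(n), y]).mean()
    grad = probs.copy()
    grad[np.arange(n), y] -= 1.0          # d(loss)/d(logits)
    W -= lr * (x.T @ grad) / n            # gradient descent on the weights
    return loss

# Synthetic data: the class is whichever of the first 4 features is largest.
x = rng.normal(size=(64, 8))
y = np.argmax(x[:, :4], axis=1)

losses = [train_step(x, y) for _ in range(200)]
print(round(losses[0], 3), round(losses[-1], 3))  # loss should fall
```

Scaling this pattern up to GPT-2 mostly means swapping the linear layer for transformer blocks and the synthetic data for tokenized text, which is exactly the machinery llm.c packs into its concise C codebase.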
Source: www.cybertalk.org – Author: slandau By Grant Asplund, Cyber Security Evangelist, Check Point. For more than 25 years, Grant Asplund has been sharing his insights into how businesses can best protect themselves from sophisticated cyber attacks in an increasingly complex world. Grant was Check Point's first worldwide evangelist from 1998 to 2002 and returned to […]
The post AI, CVEs and Swiss cheese – Source: www.cybertalk.org first appeared on CISO2CISO.COM & CYBER SECURITY GROUP.
Source: www.techrepublic.com – Author: Fiona Jackson The realm of the Internet of Things encompasses more than just the latest products. As the network of connected devices grows — the number worldwide is expected to reach over 29 billion in 2027 — so do the policies, responsibilities and innovations that surround it, all of which contribute […]
The post 4 IoT Trends U.K. Businesses Should Watch in 2024 – Source: www.techrepublic.com first appeared on CISO2CISO.COM & CYBER SECURITY GROUP.
The blockchain analysis company is using a deep learning model, new AI techniques, and a massive dataset to better detect and track money laundering on the Bitcoin blockchain.
The post Elliptic Shows How an AI Model Can Identify Bitcoin Laundering appeared first on Security Boulevard.
50,000 security practitioners are about to attend RSA 2024. Here’s what one expert anticipates for this year’s show.
The post What to Expect at RSA 2024: Will AI Wreak Havoc on Cybersecurity? appeared first on Security Boulevard.
ChatGPT Plugins were a great addition to the ChatGPT Plus plan: They acted like browser extensions for ChatGPT, adding third-party functionality to the chatbot that OpenAI didn't build in itself.
Unfortunately for fans of these plugins, they're now no longer available. OpenAI discontinued them back in April, informing users that existing conversations with plugins couldn't be continued. (They are, however, still viewable.) OpenAI didn't take away a great function for the hell of it, though: The company made the decision because it saw its new tool, GPTs, as an improved successor.
This can get a bit confusing, since OpenAI has two different uses for the name "GPT." The one you might be more familiar with is the GPT LLMs: This includes GPT-3.5, GPT-4, GPT-4 Turbo, etc. These GPT LLMs are what power ChatGPT, as well as programs that outsource their AI processing to OpenAI. Microsoft's Copilot, for example, uses GPT-4 Turbo.
GPTs in this context, on the other hand, are customized versions of ChatGPT. Users and developers alike can create a custom GPT to do whatever they want: For example, you can make a GPT that designs custom logos, generates images using DALL-E, or writes in a manner of your choosing. They can be basic bots, or full of complexity.
Best of all, it's a no-code application: You'd assume that to build one of these programs, you'd need to know how to write the code to make it work. But OpenAI's GPT builder works as a conversation. You tell the builder what you want your GPT to do, upload additional knowledge to help the bot work, and choose its capabilities (web browsing, image generation, and code interpreting), and it generates a GPT for you. And since GPTs are trained on OpenAI's latest GPT models (again, confusing, I know), there's less dependency on third-party processes or APIs to achieve the same tasks.
You still need a ChatGPT Plus or Enterprise account to use GPTs, but if you're a paying customer, you can get started building from here. The builder walks you through the entire process, including suggesting names and generating logos. You're free to adjust anything as you see fit along the way.
While it's great that OpenAI made a GPT builder that literally anyone can use, it doesn't seem to fill the exact void that plugins left. After all, you don't necessarily want to build your own browser extensions: You want to download the best ones off your browser's web store and be done with it.
But GPTs aren't just about self-creation: Anyone, including companies, can make GPTs and put them on the GPT Store for others to use. Part of the reason OpenAI killed plugins is that many of the same companies that worked on these apps also have GPTs that do the same thing. Kayak had a ChatGPT plugin for checking prices on travel, but it now has a GPT for it instead. If you liked Wolfram's ChatGPT plugin, you'll probably like its GPT as well. OpenAI says while the plugins beta had just over 1,000 plugins to choose from, the GPT Store has hundreds of thousands of GPTs to use. While there is no doubt plenty of junk on the GPT Store to sift through, chances are you'll find GPTs you want to use.
If there was a particular plugin you loved, try searching on the GPT Store for it. (Kayak and Wolfram came right up.) Of course, the sheer number of GPTs on the store means the situation has changed considerably: Like other app stores, the GPT Store has "Featured" and "Trending" tabs for finding GPTs OpenAI and other ChatGPT users like. OpenAI is currently selling me on a wine sommelier GPT, as well as a language teacher.
Scroll through the GPT Store and see if any of the promoted options appeal to you. Then, make a search for applications you're looking for and see if any have already been made. You can get an idea of how well-liked the GPT is by the reviews and number of conversations it has been used for, similar to checking ratings on an app store. However, if you don't find what you're looking for in a search, you might want to try building the GPT yourself.
Of course, like all AI products, GPTs can hallucinate. In other words, they sometimes just make things up. Don't take everything your GPT responds with as certified fact, even if it does have access to the web. If you're using the GPT for anything important, always fact check before using the information it provides.
Anthropic is freeing its AI chatbot, Claude, from your desktop. The highly successful ChatGPT competitor has now made the jump to iOS, launching its first-ever official chatbot app for the iPhone.
The new iOS app is available to download right now, and can do everything that Claude on the web can do, including analyzing images and helping you brainstorm. It can also use images straight from your mobile library and even take new photos for immediate upload. Pro users still get full access to premium features, including the model selector and a greatly increased message limit. With such a fully-featured mobile experience, Claude is one step closer to closing any gaps between itself and ChatGPT.
To try out the Claude app for yourself, download it directly from the App Store. Just be careful not to install any imposters. One of the first options that shows up when searching for Claude is an app called "Chat AI with Claude," which tracks quite a lot of your personal information, including purchases, device ID, user ID, app usage, and more. Either be careful when searching, or just click this link to go directly to Claude's App Store listing.
Anthropic says that Claude’s iOS app will allow for seamless syncing between the web and mobile app, so you can move from your laptop to your iPhone and pick up where you left off. You can also take photos and import them directly into the app to analyze them. The mobile app is available for all Claude users, including free users, although there's no word on an Android version yet.
On top of launching its iOS app, Anthropic also recently debuted a new Teams subscription for the AI chatbot, which offers the premium version of the chatbot for just $30 a user per month, with a minimum of five users.
Israeli AI security firm Apex has received $7 million in seed funding for its detection, investigation, and response platform.
The post AI Security Startup Apex Emerges From Stealth With Funding From OpenAI CEO appeared first on SecurityWeek.
For years, the media has been warning me that artificial intelligence will soon take over my job. Several years later, I'm somehow still out here making a living as a writer, perhaps because AI can't always be trusted to do the job well.
Obviously, AI can't take over for me as a parent either, but I was curious how the technology could help me with the job I don't get paid for: being a dad to two cool kids.

I started doing some research and discovered there are plenty of practical uses for AI in the parenting realm. However, there are also some things AI can ostensibly do that made me think twice about what the innovation is capable of.
Here are some sensible uses for AI to help you with your caregiver duties and others that seem, as my 7-year-old is fond of saying, a bit “sus.”
Suppose you need help to start a meaningful conversation with your child or are trying to figure out where to begin with a sensitive topic. In that case, AI can be a useful tool to initiate a discussion. However, it's crucial to remember that AI is just a tool—don't expect your child to open up to you solely because you used a script generated by a computer program. You need to be actively involved and verify the suggestions the AI provides, and then put them to use while meaningfully connecting with your kid.
AI is not a substitute for your role as a parent. While some tools, such as Oath Care, are helpful to mothers in the early stages of child care, you should find a parenting style that best fits your family and your values. Since AI doesn't know you or your child first-hand, the information and advice it offers might not fit your temperament or style.
As someone who makes up silly bedtime stories for my boys on the fly, I occasionally need help crafting a compelling yarn to get them to sleep. You can use programs like Hypotenuse to write a story for you. All you need to do is give it some details to get started.
Children crave the unique touch that only a parent can provide in a bedtime story. They yearn for your voice, expressions, and personal touch. No computer program can replicate that. It's your presence that gives the words life and captivates their imagination.
If you and your family are tired of takeout but don't have the time to plan meals, AI can help. For instance, you can use ChatGPT to generate meal ideas based on the ingredients you have on hand. According to this report, you can enter a few prompts into the free version to help generate breakfast, lunch, and dinner ideas for your family without searching everywhere for a recipe. However, it's important to note that these tools are imperfect and may not always provide the most suitable meal options for your family's preferences and dietary needs.
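As a rough sketch of what that ingredient-based prompting looks like in code, the snippet below builds the request programmatically. The API call at the end assumes the official openai Python package, a configured OPENAI_API_KEY, and an example model name; none of those details come from the article, and the chat interface works just as well.

```python
# A minimal sketch of prompting a chat model for meal ideas from on-hand
# ingredients. The prompt builder is plain Python; the API call at the bottom
# assumes the official `openai` package and an OPENAI_API_KEY environment
# variable, and the model name is only an example.
import os

def build_meal_prompt(ingredients, meal="dinner", servings=4):
    """Turn a pantry list into a single instruction for the chatbot."""
    items = ", ".join(ingredients)
    return (
        f"I have these ingredients: {items}. "
        f"Suggest three {meal} ideas for {servings} people using mostly "
        "these items, and flag anything I'd still need to buy."
    )

prompt = build_meal_prompt(["chicken thighs", "rice", "spinach", "lemon"])
print(prompt)

if os.environ.get("OPENAI_API_KEY"):  # only call out if a key is configured
    from openai import OpenAI
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)
```

The same prompt works pasted straight into the free ChatGPT interface; the code just makes it repeatable once you've found a wording your family likes.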
Reports say we're nearing a point when robots will be able to grocery shop for us. That sounds cool, but as someone who frequently uses the Target app to order groceries, I currently can't get a human to pick out a properly ripe banana, and I doubt a robot will be able to do much better. Having AI do my grocery shopping also removes any sense of discovery from the experience, meaning my kids and I might never know about a new product on the shelves.
Any pediatrician will offer plenty of checklists to help parents track their child's development. However, getting your kid in for a checkup can take a while. For parents concerned about their child's development, there are AI apps to keep track of cognitive, social, and language development milestones. Some could even help detect autism early on. Many of these programs have yet to be clinically evaluated, so don't take the results as gospel; they do sound promising, though, so compile your observations to discuss with your pediatrician.
Using AI to save time or make more informed decisions seems perfectly reasonable. However, using the technology to help you decide if it's time for your child to, say, have a social media account, or if they should be homeschooled feels a bit off.
An AI program such as Bottell may claim to offer personalized advice for you and your child, but only you know what's suitable for your offspring. AI is not a substitute for your own judgment and understanding of your child's needs and circumstances.
If a rainy day has ruined outdoor plans, parents can use ChatGPT to generate ideas for bored kids. Ask the program for ideas for games, science experiments, puzzles, and crafts that families can do indoors. You can even ask it to give them an educational bent so children can learn something while they have fun.
They want to play with you, not a computer.
This one is a tough call. While Sal Khan, the founder and CEO of Khan Academy, shared his belief in a popular TED Talk that AI can be used as an educational tool to tutor kids worldwide, The Wall Street Journal found that his AI-powered education bot Khanmigo often couldn't perform basic math. A company spokesperson told reporter Matt Barnum that upgrades were made to improve the bot's accuracy.
If you don't remember how to do the math your child is struggling with, you can consider asking an AI for help describing how to solve a problem—but given that Khan Academy emphasizes to educators that the technology isn't perfect, you'll still need to double-check the figures and formulas on your kids' homework for the time being.
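If you want a quick sanity check that doesn't depend on the bot at all, you can plug its claimed answer back into the original equation yourself. A minimal sketch, where the equation and numbers are just an example:

```python
# One way to double-check an AI tutor's algebra: substitute the claimed answer
# back into the original equation instead of trusting the chatbot's arithmetic.
def checks_out(lhs, rhs, claimed_x):
    """Evaluate both sides of the equation at the claimed solution."""
    return lhs(claimed_x) == rhs(claimed_x)

# Homework problem: 3x + 7 = 22. Suppose the chatbot claims x = 5.
print(checks_out(lambda x: 3 * x + 7, lambda x: 22, 5))   # True
print(checks_out(lambda x: 3 * x + 7, lambda x: 22, 6))   # False
```

Substitution catches arithmetic slips without requiring you to re-derive the solution, which is exactly the failure mode the Wall Street Journal reported.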
Japan's Prime Minister unveiled an international framework for regulation and use of generative AI, adding to global efforts on governance for the rapidly advancing technology.
The post Japan’s Kishida Unveils a Framework for Global Regulation of Generative AI appeared first on SecurityWeek.
Source: www.techrepublic.com – Author: Megan Crouse Security researchers in Adobe’s bug bounty program can now pick up rewards for finding vulnerabilities in Adobe Firefly and Content Credentials. The bug hunt will be open to members of Adobe’s private bug bounty program starting May 1. Members of Adobe’s public bug bounty program will be eligible to […]
The post Adobe Adds Firefly and Content Credentials to Bug Bounty Program – Source: www.techrepublic.com first appeared on CISO2CISO.COM & CYBER SECURITY GROUP.
Source: grahamcluley.com – Author: Graham Cluley The UK Government takes aim at IoT devices shipping with weak or default passwords, an identity thief spends two years in jail after being mistaken for the person who stole his name, and are you au fait with the latest scams? All this and much more is discussed in […]
The post Smashing Security podcast #370: The closed loop conundrum, default passwords, and Baby Reindeer – Source: grahamcluley.com first appeared on CISO2CISO.COM & CYBER SECURITY GROUP.
On Wednesday, Anthropic announced the launch of an iOS mobile app for its Claude 3 AI language models, which are similar to OpenAI's ChatGPT. It also introduced a new subscription tier designed for group collaboration. Before the app launch, Claude was only available through a website, an API, and third-party apps that integrated Claude through that API.
Like the ChatGPT app, Claude's new mobile app serves as a gateway to chatbot interactions, and it also allows uploading photos for analysis. While it's only available on Apple devices for now, Anthropic says that an Android app is coming soon.
Anthropic rolled out the Claude 3 large language model (LLM) family in March, featuring three different model sizes: Claude Opus, Claude Sonnet, and Claude Haiku. Currently, the app uses Sonnet for regular users and Opus for Pro users.
In mid-June 2019, Microsoft co-founder Bill Gates and CEO Satya Nadella received a rude awakening in an email warning that Google had officially gotten too far ahead on AI and that Microsoft may never catch up without investing in OpenAI.
With the subject line "Thoughts on OpenAI," the email came from Microsoft's chief technology officer, Kevin Scott, who is also the company’s executive vice president of AI. In it, Scott said that he was "very, very worried" that he had made "a mistake" by dismissing Google's initial AI efforts as a "game-playing stunt."
It turned out, Scott suggested, that instead of goofing around, Google had been building critical AI infrastructure that was already paying off, according to a competitive analysis of Google's products that Scott said showed that Google was competing even more effectively in search. Scott realized that while Google was already moving on to production for "larger scale, more interesting" AI models, it might take Microsoft "multiple years" before it could even attempt to compete with Google.
Traceable AI has raised $110 million since launching in 2018 with ambitious plans in the competitive API security and observability space.
The post Traceable AI Raises $30 Million to Safeguard Cloud APIs appeared first on SecurityWeek.
Challenger to market leader OpenAI says it wants to ‘meet users where they are’ and become part of users’ everyday life
OpenAI’s ChatGPT is facing serious competition, as the company’s rival Anthropic brings its Claude chatbot to iPhones. Anthropic, led by a group of former OpenAI staff who quit over differences with chief executive Sam Altman, has a product that already beats ChatGPT on some measures of intelligence, and now wants to win over everyday users.
“In today’s world, smartphones are at the centre of how people interact with technology. To make Claude a true AI assistant, it’s crucial that we meet users where they are – and in many cases, that’s on their mobile devices,” said Scott White at Anthropic.
When it comes to judging which large language models are the "best," most evaluations tend to look at whether or not a machine can retrieve accurate information, perform logical reasoning, or show human-like creativity. Recently, though, a team of researchers at Georgia State University set out to determine if LLMs could match or surpass human performance in the field of moral guidance.
In "Attributions toward artificial agents in a modified Moral Turing Test"—which was recently published in Nature's online, open-access Scientific Reports journal—those researchers found that morality judgments given by ChatGPT4 were "perceived as superior in quality to humans'" along a variety of dimensions like virtuosity and intelligence. But before you start to worry that philosophy professors will soon be replaced by hyper-moral AIs, there are some important caveats to consider.
For the study, the researchers used a modified version of a Moral Turing Test first proposed in 2000 to judge "human-like performance" on theoretical moral challenges. The researchers started with a set of 10 moral scenarios originally designed to evaluate the moral reasoning of psychopaths. These scenarios ranged from ones that are almost unquestionably morally wrong ("Hoping to get money for drugs, a man follows a passerby to an alley and holds him at gunpoint") to ones that merely transgress social conventions ("Just to push his limits, a man wears a colorful skirt to the office for everyone else to see.")
OpenAI has been showcasing Sora, its artificial intelligence video-generation model, to media industry executives in recent weeks to drum up enthusiasm and ease concerns about the potential for the technology to disrupt specific sectors.
The Financial Times wanted to put Sora to the test, alongside the systems of rival AI video generation companies Runway and Pika.
We asked executives in advertising, animation, and real estate to write prompts to generate videos they might use in their work. We then asked them their views on how such technology may transform their jobs in the future.
AI-Native Trust, Risk, and Security Management (TRiSM) startup DeepKeep raises $10 million in seed funding.
The post DeepKeep Launches AI-Native Security Platform With $10 Million in Seed Funding appeared first on SecurityWeek.
The British brand has entered the booming market in luxury ‘car-chitecture’, opening a themed tower in Miami boasting ballroom, helipad and infinity pool – all offering millionaires a perfect view of our choking, collapsing world
Move over, James Bond – a new Aston Martin has rolled into town, brimming with more flashy features than Q could ever dream of. Parked ostentatiously on the Miami waterfront, overlooking a private marina crowded with superyachts, its streamlined flanks glisten in the Florida sunshine, housing an interior trimmed with the finest leathers and exotic wood veneers. There’s no ejector seat or rocket-launcher, but it is the biggest Aston Martin ever made – housing Jacuzzi, bar, cinema, golf simulator, art gallery, ballroom and infinity pool, all crowned with a 66th-storey helipad.
Unveiled in the week of the Miami Grand Prix, the latest exclusive model from the timeless British automotive brand is not a high-performance sports car, but an ultra-luxury apartment building – the tallest residential tower in the US south of New York. After Aston Martin’s years of financial woes, following a disastrous stock market performance since the company’s 2018 listing, it seems that the boutique carmaker is seeking salvation in property development.
Source: www.darkreading.com – Author: Microsoft Security
Generative artificial intelligence (GenAI) has emerged as a significant change-maker, enabling teams to innovate faster, automate existing workflows, and rethink the way we go to work. Today, more than 55% of companies are currently piloting or actively using GenAI solutions. But for all its […]
The post How to Red Team GenAI: Challenges, Best Practices, and Learnings – Source: www.darkreading.com first appeared on CISO2CISO.COM & CYBER SECURITY GROUP.
On Tuesday, the US General Services Administration began an auction for the decommissioned Cheyenne supercomputer, located in Cheyenne, Wyoming. The 5.34-petaflop supercomputer ranked as the 20th most powerful in the world at the time of its installation in 2016. Bidding started at $2,500, but its price is currently $27,643, with the reserve not yet met.
The supercomputer, which officially operated between January 12, 2017, and December 31, 2023, at the NCAR-Wyoming Supercomputing Center, was a powerful (and once considered energy-efficient) system that significantly advanced atmospheric and Earth system sciences research.
"In its lifetime, Cheyenne delivered over 7 billion core-hours, served over 4,400 users, and supported nearly 1,300 NSF awards," writes the University Corporation for Atmospheric Research (UCAR) on its official Cheyenne information page. "It played a key role in education, supporting more than 80 university courses and training events. Nearly 1,000 projects were awarded for early-career graduate students and postdocs. Perhaps most tellingly, Cheyenne-powered research generated over 4,500 peer-review publications, dissertations and theses, and other works."
On Sunday, word began to spread on social media about a new mystery chatbot named "gpt2-chatbot" that appeared in the LMSYS Chatbot Arena. Some people speculate that it may be a secret test version of OpenAI's upcoming GPT-4.5 or GPT-5 large language model (LLM). The paid version of ChatGPT is currently powered by GPT-4 Turbo.
Currently, the new model is only available for use through the Chatbot Arena website, although in a limited way. In the site's "side-by-side" arena mode where users can purposely select the model, gpt2-chatbot has a rate limit of eight queries per day—dramatically limiting people's ability to test it in detail.
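Details of LMSYS's implementation aren't public, but a per-user daily cap like that is straightforward to sketch. The class below is an illustrative stand-in, not the Arena's actual code:

```python
# A sketch of the kind of per-user daily quota that could produce an
# "8 queries per day" limit. Purely illustrative; not LMSYS's implementation.
import datetime
from collections import defaultdict

class DailyQuota:
    def __init__(self, limit=8):
        self.limit = limit
        self.counts = defaultdict(int)   # (user, date) -> queries used

    def try_query(self, user, today=None):
        """Record the query and return True if the user has quota left."""
        today = today or datetime.date.today()
        key = (user, today)
        if self.counts[key] >= self.limit:
            return False
        self.counts[key] += 1
        return True

quota = DailyQuota(limit=8)
day = datetime.date(2024, 5, 1)
results = [quota.try_query("alice", day) for _ in range(10)]
print(results.count(True))   # 8: the ninth and tenth queries are refused
print(quota.try_query("alice", datetime.date(2024, 5, 2)))  # True: new day
```

The counter resets per calendar date, which matches the observed behavior of a fixed number of queries per day rather than a rolling 24-hour window.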
So far, gpt2-chatbot has inspired plenty of rumors online, including that it could be the stealth launch of a test version of GPT-4.5 or even GPT-5—or perhaps a new version of 2019's GPT-2 that has been trained using new techniques. We reached out to OpenAI for comment but did not receive a response by press time. On Monday evening, OpenAI CEO Sam Altman seemingly dropped a hint by tweeting, "i do have a soft spot for gpt2."
The Chicago Tribune, Denver Post and others file suit saying the tech companies ‘purloin millions’ of articles without permission
A group of eight US newspapers is suing ChatGPT-maker OpenAI and Microsoft, alleging that the technology companies have been “purloining millions” of copyrighted news articles without permission or payment to train their artificial intelligence chatbots.
The New York Daily News, Chicago Tribune, Denver Post and other papers filed the lawsuit on Tuesday in a New York federal court.
Apple has poached dozens of artificial intelligence experts from Google and has created a secretive European laboratory in Zurich, as the tech giant builds a team to battle rivals in developing new AI models and products.
According to a Financial Times analysis of hundreds of LinkedIn profiles as well as public job postings and research papers, the $2.7 trillion company has undertaken a hiring spree over recent years to expand its global AI and machine learning team.
The iPhone maker has particularly targeted workers from Google, attracting at least 36 specialists from its rival since it poached John Giannandrea to be its top AI executive in 2018.
You will see only a tiny fraction of the billions of videos on YouTube in your lifetime—which may be for the best. There are some videos where you just want the key points, but you have to sit through a lot of nonsense to get to them. That's wasted time. What if you could cut your viewing time short by summarizing the key information in the videos you watch? Fortunately, Gemini, Google's AI chatbot, has a YouTube extension built in and enabled by default.
All available extensions are enabled by default in Gemini. But if you need to check, here's where you should go on the desktop and an Android or iOS phone.
On the desktop, open Gemini in your browser. Ensure you are logged into the Google account you want to use. Then, click Settings on the left sidebar and select Extensions in the menu. Toggle the switch for YouTube if it's not blue.
On your mobile, open the Gemini app (Android only) or open Gemini in the Google app (iOS). You can also access it on the mobile browser. Tap on your profile photo and select Extensions to open the list. Enable YouTube with the toggle switch if it's disabled.
Open the video you want to watch and summarize. Copy its URL from the address bar if on desktop, and the Share menu on mobile.
Paste the link into Gemini, and use a natural language prompt like "Summarize this video" or "Give me a quick summary."
As this screenshot shows, it did an accurate job with a video I had just watched:
Note: Gemini summarizes YouTube videos using text that YouTube automatically generates, like captions and transcripts. If a video doesn't have them, it won't be able to extract anything from it. Also, the summarization feature isn't supported for YouTube videos in every language: it's only available in English, Japanese, and Korean.
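That transcript dependency also suggests how you could roll your own version of the trick: fetch a video's captions (with a transcript library of your choosing; that step is omitted here), trim them to a budget, and wrap them in a summarization prompt. The helper below is a hypothetical sketch; the word budget and prompt wording are arbitrary choices.

```python
# Hypothetical sketch of transcript-based summarization: join caption lines,
# truncate to a word budget, and frame the request for a chat model. Fetching
# the transcript itself is deliberately left out.
def build_summary_prompt(transcript_lines, max_words=3000):
    """Join caption lines, truncate to a word budget, and frame the request."""
    words = " ".join(transcript_lines).split()
    excerpt = " ".join(words[:max_words])
    return (
        "Summarize the key points of this video transcript in five bullets:\n\n"
        + excerpt
    )

lines = [
    "welcome back to the channel",
    "today we compare two laptops",
    "battery life is the biggest difference",
]
prompt = build_summary_prompt(lines)
print(prompt.splitlines()[0])   # the instruction line
```

This also makes Gemini's limitation concrete: with no captions, there is no text to put after the instruction, so there is nothing for the model to summarize.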
This summarization feature is especially handy if you need to pluck the key details out of the video: for instance, the price or a price comparison of the products that are being reviewed.
Tip: I often use it to generate the main points and check if a long YouTube video is worth watching, especially if the description and comments don't suggest anything.
You can ask Gemini to recommend a few videos on a topic of your choice. Then, in a follow-up, you can ask Gemini to summarize a specific video—or all of them.
The Gemini and YouTube pairing works well with well-structured and informative videos. This method can quickly give you an overview of a topic before you dive into the deep end. And with the right prompts, you can start a Q&A session with Gemini on the videos and create your own "Sparknotes" for learning from a bunch of videos.
Asking Gemini to dress up the information in a nice table is visually helpful when the YouTube video compares two items (for instance, which laptop to buy). You can also ask Gemini to present their pros and cons. Sometimes, the AI does this without any additional prompting.
The goal is to enable cybersecurity and data science teams to work together and share their expertise.
The post Sysdig Extends CNAPP Reach to AI Workloads appeared first on Security Boulevard.
Microsoft provides an easy and logical first step into GenAI for many organizations, but beware of the pitfalls.
The post Why Using Microsoft Copilot Could Amplify Existing Data Quality and Privacy Issues appeared first on SecurityWeek.
Source: www.techrepublic.com – Author: Fiona Jackson The GPT-4 large language model from OpenAI can exploit real-world vulnerabilities without human intervention, a new study by University of Illinois Urbana-Champaign researchers has found. Other open-source models, including GPT-3.5 and vulnerability scanners, are not able to do this. A large language model agent — an advanced system based […]
The post OpenAI’s GPT-4 Can Autonomously Exploit 87% of One-Day Vulnerabilities, Study Finds – Source: www.techrepublic.com first appeared on CISO2CISO.COM & CYBER SECURITY GROUP.
Source: www.techrepublic.com – Author: Fiona Jackson AI’s newfound accessibility will cause a surge in prompt hacking attempts and private GPT models used for nefarious purposes, a new report revealed. Experts at the cyber security company Radware forecast the impact that AI will have on the threat landscape in the 2024 Global Threat Analysis Report. It […]
The entry Prompt Hacking, Private GPTs, Zero-Day Exploits and Deepfakes: Report Reveals the Impact of AI on Cyber Security Landscape – Source: www.techrepublic.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.
CEOs of major tech companies are joining a new artificial intelligence safety board to advise the federal government on how to protect the nation’s critical services from “AI-related disruptions.”
The post Tech CEOs Altman, Nadella, Pichai and Others Join Government AI Safety Board Led by DHS’ Mayorkas appeared first on SecurityWeek.
Like it or not, LinkedIn is still one of the best ways to search for jobs online. But since 2023, the site has been experimenting with generative AI, making it possible to get AI help with finding new jobs, writing messages, connecting with others, and building your profile and job descriptions. Some users are even seeing AI prompts showing up under every post.
While posed as helpful, that kind of AI integration can get intrusive fast, as evidenced by comments asking how to turn LinkedIn’s AI off under posts advertising it. If you’d rather keep your online recruitment and job searches as human-powered as possible, here’s a quick breakdown of LinkedIn’s AI features and which ones you can turn off.
LinkedIn’s AI integration is pretty ubiquitous across the site, but there’s a catch: it’s reserved for Premium users. That means free users don’t have to lift a finger if they want to skip AI on LinkedIn. They’ll still see the occasional ad recommending they buy Premium to access a certain AI feature, but Premium ads aren’t exactly a new thing for LinkedIn.
If you do pay for Premium, the AI integration is going to be a bit harder to ignore: LinkedIn considers it part of your subscription, so it isn't eager to let you turn off these paid features.
LinkedIn currently uses AI on jobs pages, in its recruiter tools, under posts, and in most text boxes. Some, but not all, of these can be turned off, and, more annoyingly, the AI features you have access to differ across Premium tiers.
There are four areas where LinkedIn’s AI integration is most prevalent. The first is on job listings.
With the Career tier of Premium, which I signed up for a free trial of while writing this article, job listings will now show prompts for LinkedIn’s AI chatbot underneath the job description. These include questions like “Am I a good fit for this job?” and “How can I best position myself for the job?” Answers to these usually read like summaries of either your job profile or the job description, while “Tell me more about [employer]” largely summarizes the company’s LinkedIn page.
The second is in LinkedIn Recruiter, where users can run AI-assisted candidate searches, get help filling out fields in projects, and send AI-assisted messages. These features require an enterprise-level LinkedIn Recruiter subscription, so I wasn’t able to test them for this article. Note that LinkedIn Premium's Recruiter Lite tier does not get access to these tools.
Premium users will also find AI in most of LinkedIn’s text boxes as well as on their profile. Here, LinkedIn will offer to help draft messages, posts, and your profile’s Headline and About sections. An odd quirk: the Sales Navigator Core and Recruiter Lite packages, despite costing more than the Career and Business tiers, do not have access to AI message drafts.
Perhaps the most visible of LinkedIn’s AI features are the “AI takeaways on feed posts.” On occasion, these will show up next to sparkle icons while browsing your feed, and will suggest questions related to the post. Clicking on them will open LinkedIn's AI chatbot and ask the question.
The bad news is that most of LinkedIn’s AI features can’t be toggled off, so your best bet is to only sign up for the Premium tier with the features you want. A short list of available AI features is visible when signing up. Once you’ve signed up, you can double check which AI features you have access to by clicking the “See your Premium features” tab in the site’s top-left corner.
That said, there are a couple of steps you can take to make AI less prevalent on your feed. The most direct way to disable LinkedIn AI is in LinkedIn Recruiter, where the ability to send AI-assisted messages can be turned off at both the admin and seat level.
To turn off AI-assisted messages in LinkedIn Recruiter’s admin tools, hover over your profile on your Recruiter homepage and click Product Settings. Navigate to Company Settings > Preferences in the left rail and click Edit under Enable AI-assisted message auto-draft. Toggle AI-assisted messages Off and click Save.
To turn off AI-assisted messages at Recruiter’s seat level, hover over your profile on your Recruiter homepage, select Product Settings from the dropdown menu, then click Messaging under My Account settings in the left rail. Click Edit under Enable AI-assisted auto-draft, toggle the feature off, and click Save.
All other users can easily ignore LinkedIn’s AI-assisted messages, even if they can’t outright disable them. That’s because AI messages are currently only visible when clicking Message either in the Meet the hiring team section of the jobs page or in the introduction section of another user’s profile. Messages made via the Messaging window in the bottom-right corner will not show the Write with AI prompt.
Sadly, there is no way to keep the Write with AI prompt from appearing when writing a new post or editing your profile, so it’s important to know what it looks like to avoid accidentally clicking into it.
When editing your profile's Headline or About section, the Write with AI box will appear underneath your text box with a gold sparkle next to it and a Premium tag to the right. Avoid clicking it to keep from using the AI, but don’t worry if you do accidentally click it. If you don’t like what the AI has suggested, you can click the Revert button to undo its changes and the Thumbs Down button to mark the suggestion as bad.
It’s a bit easier to ignore AI integration on LinkedIn posts, as the Rewrite with AI button will be grayed out until you’ve already written a few lines of text. If you do accidentally click it, click the Undo button to get rid of the changes to your text. You’ll also still be able to give the AI-rewrite either a thumbs up or thumbs down.
As for the AI prompts on job listings or the AI takeaways on posts in your feed? The best way to avoid them is simply to not sign up for Premium.
Apple is unquestionably late to the AI party. While Microsoft and Google have rolled out and integrated proprietary generative AI tech into their massive platforms, Apple has remained quiet on the trend—and the silence is surprising, considering the heat AI has generated, and the fact that Apple is one of the world's most valuable tech companies. Hell, even Meta is all-in on adding intrusive AI features to its products.
But if rumor and speculation are to be believed, Apple is ready to make some noise. The company is widely expected to roll out major AI features as part of its new suite of software updates this year, including iOS 18. We don't even need to turn to the rumor mill or trust unsubstantiated claims to infer this: Apple researchers have already publicized much of their progress on AI, such as work on the company's proprietary AI model, an AI image editor, and an AI image animator.
Even with all this work done in-house, Apple might not have the resources to pull off all of its upcoming AI features without an assist. According to Bloomberg's Mark Gurman, the company is currently in talks to outsource some of its AI processing needs to OpenAI and its generative AI technology. If a deal were to go through, Apple could use OpenAI's GPT models to run a chatbot, like ChatGPT, in iOS, among other new features.
This isn't the first time the company has approached OpenAI about such a deal, nor is it the first time Apple has looked to a third party for AI processing. We know, for example, the company is in discussions with Google to license Gemini for some of its AI ventures. It seems Apple is still weighing whom to partner with, and could even go with another party altogether.
You would think the company would be a bit more concerned about the timing of these deals, however: WWDC is just over two months away, and that's when all eyes will be on Apple to see what the company has been cooking in the AI department. ChatGPT launched at the end of 2022, kicking off this AI frenzy; Apple will be joining the party a year and a half late, and the tech world will be taking note of how much (or how little) the company is doing to embrace AI in the near-term.
It's possible that Apple is hedging its bets until then, seeing how much it can handle on its own before committing to outsourcing AI processing. If the company can power an AI-upgraded Siri on its own, on-device, that would be much better for it than relying on Google's or OpenAI's tech. Anything outsourced to other companies will likely need to be handled in the cloud, which weakens privacy. On-device AI would keep your information restricted to your iPhone, while cloud-based AI could leave your data exposed to the eyes of Google, OpenAI, or whichever other partner Apple chooses.
New CISA guidelines categorize AI risks into three significant types and push a four-part mitigation strategy.
The post CISA Rolls Out New Guidelines to Mitigate AI Risks to US Critical Infrastructure appeared first on SecurityWeek.