
Picking fights and gaining rights, with Justin Brookman: Lock and Code S05E09

22 April 2024 at 11:46

This week on the Lock and Code podcast…

Our Lock and Code host, David Ruiz, has a bit of an apology to make:

“Sorry for all the depressing episodes.”

When the Lock and Code podcast explored online harassment and abuse this year, our guest provided several guidelines and tips for individuals to lock down their accounts and remove their sensitive information from the internet, but larger problems remained. Content moderation is failing nearly everywhere, and data protection laws are unequal across the world.

When we told the true tale of a virtual kidnapping scam in Utah, though the teenaged victim at the center of the scam was eventually found, his family still lost nearly $80,000.

And when we asked Mozilla’s Privacy Not Included team about what types of information modern cars can collect about their owners, we were entirely blindsided by the policies from Nissan and Kia, which claimed the companies can collect data about their customers’ “sexual activity” and “sex life.”

(Let’s also not forget about that Roomba that took a photo of someone on a toilet and how that photo ended up on Facebook.)

In looking at these stories collectively, it can feel like the everyday consumer is hopelessly outmatched by modern companies. What good does it do to follow personal cybersecurity best practices when the companies we rely on can still leak our most sensitive information and suffer few consequences? What’s the point of using a privacy-forward browser to obscure our online behavior from advertisers when the machinery that powers the internet finds new ways to surveil our every move?

These are entirely relatable, if fatalistic, feelings. But we are here to tell you that nihilism is not the answer.

Today, on the Lock and Code podcast, we speak with Justin Brookman, director of technology policy at Consumer Reports, about some of the most recent, major consumer wins in the tech world, what it took to achieve those wins, and what levers consumers can pull on today to have their voices heard.

Brookman also speaks candidly about the shifting priorities in today’s legislative landscape.

“One thing we did make the decision about is to focus less on Congress because, man, I’ll meet with those folks so we can work on bills, [and] there’ll be a big hearing, but they’ve just failed to do so much.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Three ways the US could help universities compete with tech companies on AI innovation

The ongoing revolution in artificial intelligence has the potential to dramatically improve our lives—from the way we work to what we do to stay healthy. Yet ensuring that America and other democracies can help shape the trajectory of this technology requires going beyond the tech development taking place at private companies. 

Research at universities drove the AI advances that laid the groundwork for the commercial boom we are experiencing today. Importantly, academia also produced the leaders of pioneering AI companies. 

But today, large foundational models (LFMs) such as ChatGPT, Claude, and Gemini require such vast computational power and such extensive data sets that private companies have replaced academia at the frontier of AI. Empowering our universities to remain alongside them at the forefront of AI research will be key to realizing the field’s long-term potential. This will require correcting the stark asymmetry between academia and industry in access to computing resources.

Academia’s greatest strength lies in its ability to pursue long-term research projects and fundamental studies that push the boundaries of knowledge. The freedom to explore and experiment with bold, cutting-edge theories leads to the discoveries that serve as the foundation for future innovation. And while tools built on LFMs are in everybody’s pocket, many questions about them remain open, since these models are still a “black box” in many ways. For example, we know AI models have a propensity to hallucinate, but we still don’t fully understand why.

Because they are insulated from market forces, universities can chart a future where AI truly benefits the many. Expanding academia’s access to resources would foster more inclusive approaches to AI research and its applications. 

The pilot of the National Artificial Intelligence Research Resource (NAIRR), mandated in President Biden’s October 2023 executive order on AI, is a step in the right direction. Through partnerships with the private sector, the NAIRR will create a shared research infrastructure for AI. If it realizes its full potential, it will be an essential hub that helps academic researchers access GPU computational power more effectively. Yet even if the NAIRR is fully funded, its resources are likely to be spread thin. 

This problem could be mitigated if the NAIRR focused on a select number of discrete projects, as some have suggested. But we should also pursue additional creative solutions to get meaningful numbers of GPUs into the hands of academics. Here are a few ideas:

First, we should improve the supercomputer infrastructure the US government already funds with large-scale GPU clusters and leverage it for AI research. Academic researchers should be able to partner with the US National Labs on grand challenges in AI research.

Second, the US government should explore ways to reduce the costs of high-end GPUs for academic institutions—for example, by offering financial assistance such as grants or R&D tax credits. Initiatives like New York’s, which make universities key partners with the state in AI development, are already playing an important role at the state level. This model should be emulated across the country.

Lastly, recent export control restrictions could over time leave some US chipmakers with surplus inventory of leading-edge AI chips. In that case, the government could purchase this surplus and distribute it to universities and academic institutions nationwide.

Imagine the surge of academic AI research and innovation these actions would ignite. Ambitious researchers at universities have a wealth of diverse ideas that are too often stopped short for lack of resources. But supplying universities with adequate computing power will enable their work to complement the research carried out by private industry. Thus equipped, academia can serve as an indispensable hub for technological progress, driving interdisciplinary collaboration, pursuing long-term research, nurturing talent that produces the next generation of AI pioneers, and promoting ethical innovation. 

Historically, similar investments have yielded critical dividends in innovation. The United States of the postwar era cultivated a symbiotic relationship among government, academia, and industry that carried us to the moon, seeded Silicon Valley, and created the internet.

We need to ensure that academia remains a strong pole in our innovation ecosystem. Investing in its compute capacity is a necessary first step. 

Ylli Bajraktari is CEO of the Special Competitive Studies Project (SCSP), a nonprofit initiative that seeks to strengthen the United States’ long-term competitiveness. 

Tom Mitchell is the Founders University Professor at Carnegie Mellon University. 

Daniela Rus is a professor of electrical engineering and computer science at MIT and director of its Computer Science and Artificial Intelligence Laboratory (CSAIL).
