A Nice Little Cryptography Primer

By itss | 28/06/2021
Pun Intended.

Category: Technology
© 2017 IT Sales & Services Ltd
Quality IT solutions in Tanzania since 2010