
A Nice Little Cryptography Primer

By itss | 28/06/2021

Pun Intended.

Category: Technology





© 2017 IT Sales & Services Ltd
Quality IT solutions in Tanzania since 2010