
A Nice Little Cryptography Primer

By itss | 28/06/2021

Pun Intended.

Category: Technology




