High-level architecture of Frankenstein. Image: Kevin W. Hamlen

(Phys.org)—In the ongoing struggle between good and evil, or in this case the battle between those who create malware and those who seek to detect and destroy it, the good guys appear to have mimicked the bad by creating a computer virus that can evade detection by building itself from pieces of code that normally reside harmlessly on people's computers. The result, say Vishwath Mohan and Kevin Hamlen of the University of Texas, is a cyber version of Frankenstein's monster.

The research, which was partly funded by the US Air Force, was presented at this year's USENIX Workshop on Offensive Technologies. There the team said their aim in creating the malware was to see whether it is possible to build a virus out of nothing but gadgets: snippets of code borrowed from such commonly installed programs as Internet Explorer or Notepad. Theoretical research over the past several years had suggested it could be done. The larger purpose of the project was to test whether the technique could produce a virus that conventional anti-virus programs cannot detect. The answer appears to be yes, though the malware the team created is not a virus in the technical sense because it causes no harm; it is merely a proof of concept. Their code assembled new code from gadgets that ran two harmless algorithms, but those algorithms could just as easily have been very, very harmful.

More information: Kevin W. Hamlen's page: www.utdallas.edu/~hamlen/research.html

© 2012 Phys.org Citation: Researchers create 'Frankenstein' malware made up of common gadgets (2012, August 21) retrieved 18 August 2019 from https://phys.org/news/2012-08-frankenstein-malware-common-gadgets.html
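The gadget-harvesting step described above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual tool: it scans raw bytes for short instruction runs ending in the x86 `ret` opcode, the classic shape of a reusable gadget.

```python
# Hypothetical sketch of gadget harvesting: find short byte runs that end
# in a return instruction inside some benign program's code. The offsets
# and lengths found are the raw material a gadget-based program would
# stitch together. Illustrative only; real tools also decode instructions.

RET = 0xC3  # x86 near-return opcode

def find_gadget_offsets(code: bytes, max_len: int = 5):
    """Return (offset, length) pairs for byte runs that end in RET."""
    gadgets = []
    for i, byte in enumerate(code):
        if byte == RET:
            # every suffix of up to max_len bytes ending at this RET
            for length in range(1, max_len + 1):
                start = i - length + 1
                if start >= 0:
                    gadgets.append((start, length))
    return gadgets

# pop rax; pop rbx; ret; nop; ret
sample = bytes([0x58, 0x5B, 0xC3, 0x90, 0xC3])
print(find_gadget_offsets(sample))
```

On this five-byte sample the scan reports candidate gadgets ending at both `ret` bytes, including the three-byte run `pop rax; pop rbx; ret` starting at offset 0.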
One of the cleverer aspects of the team's code is that the original kernel, the part that infects the computer, is itself modified to look like part of a normal gadget, leaving no trace of itself to be found. The point of creating new kinds of malware, of course, is to help people on the right side of the law stay one step ahead of those who toil in the dark to conceive and construct ever more malevolent software that, once unleashed, might prey on others and do their bidding. Getting there first gives researchers time to build defenses against such malware before the bad guys figure out how to build it themselves. In this case, some have suggested that the best way to detect the new, so-called undetectable malware is security software that recognizes objectionable behavior in code, rather than scanning it for identifying markers, which is how virtually all anti-virus software currently finds infections on computer systems.
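The contrast between signature scanning and the behavior-based detection suggested at the end of the article can be made concrete with a toy sketch. The byte pattern and API-call sequence below are illustrative assumptions, not any real product's rules.

```python
# Toy contrast between the two detection styles. A gadget-built program
# carries no fixed byte signature, so signature_scan misses it, while a
# behavior monitor can still flag a suspicious sequence of API calls.
# Both the signature and the call sequence are made-up examples.

SIGNATURE = b"\xde\xad\xbe\xef"  # byte pattern a signature scanner matches

# a classic code-injection call pattern, used here purely as an example
SUSPICIOUS_SEQUENCE = ["OpenProcess", "WriteProcessMemory",
                       "CreateRemoteThread"]

def signature_scan(code: bytes) -> bool:
    """Flag code containing a known byte signature."""
    return SIGNATURE in code

def behavior_scan(api_trace) -> bool:
    """Flag a trace containing the suspicious calls in order."""
    it = iter(api_trace)
    return all(call in it for call in SUSPICIOUS_SEQUENCE)

# gadget bytes borrowed from benign programs: no signature to find
print(signature_scan(b"\x90\x90\xc3"))
# but the runtime behavior still stands out
print(behavior_scan(["VirtualAlloc", "OpenProcess",
                     "WriteProcessMemory", "CreateRemoteThread"]))
```

The `all(call in it for call in ...)` idiom checks for an in-order subsequence: each membership test consumes the iterator up to the matching call, so out-of-order traces are not flagged.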
Today, OpenAI has released new versions of GPT-2, its AI model capable of generating coherent paragraphs of text without any task-specific training. The release includes a medium 345M-parameter version alongside the small 117M version of GPT-2. OpenAI has also shared the 762M and 1.5B versions with partners in the AI and security communities who are working to improve societal preparedness for large language models.

The original GPT was released in 2018. In February 2019, OpenAI announced GPT-2 with many samples and a discussion of its policy implications.

Read More: OpenAI's new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words

The team at OpenAI has decided on a staged release of GPT-2, in which the family of models is released gradually over time. The reason behind the staged release is to give people time to assess the properties of these models, discuss their societal implications, and evaluate the impacts of release after each stage. The 345M-parameter version of GPT-2 offers improved performance relative to the 117M version, though it does not make generating coherent text much easier, and the 345M version would still be difficult to misuse. Many factors were considered in releasing this staged 345M version: the ease of generating coherent text, the role of humans in the text-generation process, the likelihood and timing of future replication and publication by others, and evidence of use in the wild together with expert-informed inferences about unobservable uses. The team hopes that ongoing research on bias, detection, and misuse will give it the confidence to publish larger models, and in six months it will share a fuller analysis of language models' societal implications and its heuristics for release decisions.
The team at OpenAI is looking for partnerships with academic institutions, non-profits, and industry labs that will focus on increasing societal preparedness for large language models. It is also open to collaborating with researchers working on language-model output detection, bias, and publication norms, and with organizations potentially affected by large language models.

Alongside the models, OpenAI has published an output dataset containing GPT-2 outputs from all four model sizes, with and without top-k truncation, as well as a subset of the WebText corpus used to train GPT-2. The dataset features approximately 250,000 samples per model/hyperparameter pair, which should be sufficient for a wider range of researchers to perform quantitative and qualitative analysis.

To know more about the release, head over to the official release announcement.
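Top-k truncation, one of the sampling conditions in the output dataset above, keeps only the k most likely tokens at each generation step before sampling. A minimal sketch in plain Python follows; the function name and toy logits are assumptions for illustration, not OpenAI's implementation.

```python
# Minimal sketch of top-k truncated sampling: restrict the candidate set
# to the k highest-scoring tokens, renormalize with a softmax, and sample.
import math
import random

def top_k_sample(logits, k, rng=random):
    """Sample an index from logits, restricted to the k highest entries."""
    top = sorted(range(len(logits)), key=lambda i: logits[i],
                 reverse=True)[:k]
    # softmax over the surviving logits (shifted by the max for stability)
    m = max(logits[i] for i in top)
    weights = [math.exp(logits[i] - m) for i in top]
    return rng.choices(top, weights=weights, k=1)[0]

logits = [0.1, 3.0, 2.5, -1.0, 0.5]
print(top_k_sample(logits, k=2))  # always index 1 or 2: the two largest logits
```

With k=2 only the two highest-scoring tokens can ever be emitted, which is why truncation trades diversity for coherence in the released samples.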