Landing sites on Europa identified

Galileo got there first (Image: NASA/JPL/University of Arizona/University of Colorado)


A RIGOROUS analysis of the jagged terrain of Jupiter's moon Europa is helping to identify safe landing strips for future missions.

Europa is thought to have an ocean of water beneath its icy shell. The latest study is the first to use images from the Galileo spacecraft, which orbited Jupiter from 1995 to 2003, to generate measurements of Europa's slopes. "This is the first quantitative sampling that gives hard numbers, real numbers that you can believe," says Paul Schenk of the Lunar and Planetary Institute in Houston, Texas.

Schenk used shadows, plus pictures taken from two different angles, combined into 3D images, to calculate the slopes of various regions of Europa. He examined four different kinds of terrain: ridged plains that make up the majority of the surface; impact craters; so-called "chaos" regions where icebergs appear to float in a frozen soup; and long smooth stripes called dilational bands.
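The shadow technique rests on simple trigonometry: the Sun's elevation and the length of a shadow give a feature's height, and the height difference over a horizontal baseline gives a slope. A minimal sketch of that geometry (the numbers and function names are illustrative, not taken from Schenk's study):

```python
import math

def ridge_height(shadow_length_m: float, sun_elevation_deg: float) -> float:
    """Height of a feature from the shadow it casts: h = L * tan(elevation)."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

def slope_deg(height_m: float, baseline_m: float) -> float:
    """Average surface slope over a horizontal baseline, in degrees."""
    return math.degrees(math.atan2(height_m, baseline_m))

# A 300 m shadow under a Sun 10 degrees above the horizon implies a ridge
# about 53 m high; over a 150 m baseline that is a slope of roughly 19 degrees.
h = ridge_height(300, 10)
print(round(h), round(slope_deg(h, 150)))
```

Stereo pairs refine such estimates by triangulating heights directly from two viewing angles, which is why the study combined both approaches.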

Chaos regions and impact craters are particularly exciting for planetary scientists since liquid water from a subsurface ocean may have burst through at these points, making it possible to search for evidence of life without having to drill below the surface.

These sites are bad news for landers, though. Up to half the landscape in these regions was tilted more than 10 degrees – a similar incline is making life difficult for the Mars rover Spirit. The steepest slopes can reach 20 or 30 degrees. Even the ridged plains have rounded tops that could pose problems for landers.

The only smooth features were the dilational bands, which slope at about 5 degrees or less. These broad tracks, tens of kilometres wide and hundreds of kilometres long, form when cracks in the ice shell open in response to the gravitational pull of Jupiter and the other large moons. The cracks then fill with water and open even further, leaving smooth tracks in between. "It's a little bit like a mid-ocean ridge spreading on Earth," Schenk says.

These areas could be smoother because they didn't form as violently as impact craters, or because the upwelling water smoothed over whatever rough patches were there. Thanks to regular flooding, the cracks could also harbour life. "These bands are one of the places that a future project might decide it wanted to land," Schenk says.

Europa was also recently selected as the target for an orbiting mission. The orbiter will finish mapping Europa's surface, picking up where Galileo left off. "The Galileo antenna malfunctioned," says Schenk. "They could only map about 15 per cent of the moon at resolutions that are worth mapping."

"The issue of topography is very important as we put together the objectives for the Europa orbiter mission," says Bob Pappalardo at NASA's Jet Propulsion Laboratory in Pasadena, who is working on the Jupiter Europa Orbiter scheduled for launch in 2020.

Muscular blob shows new direction for tissue engineering

A microscope view of the new, controllable blob of muscle proteins (Image: Harvard University)




A quivering blob of muscle proteins in a Harvard lab could lead to controllable biomaterials to replace damaged body tissue.

Under a microscope, the "active gel" looks like a throbbing tangle of fibres immersed in jelly. Created by David Weitz and his colleagues at Harvard University, it is made from a molecular net of the muscle protein actin held in shape by another protein, filamin. Each actin strand has around 300 molecules of another muscle protein, myosin, attached.

The gel stiffens when exposed to ATP, the chemical that cells use to store and release energy. It becomes 1000 times firmer, a change in elasticity of the same order as Jell-O setting, says Weitz.

The myosin molecules flex like miniature biceps, bunching up the actin strands and causing the network to "tense up".

Natural mover

"What we're trying to do is unravel the design principles that nature uses to make mechanical structures," says Weitz.

Unlike the materials typically used by engineers, which have fixed properties, many natural materials and structures can adapt theirs as circumstances require. Muscle is a good example, says Weitz, and the network he has created is a step toward replicating such properties.

"This bridges ideas that have been out there," says Margaret Gardel, a researcher at the University of Chicago not involved with the work. The blob is similar to the adaptable but tough protein skeleton that holds cells in shape while also allowing them to shape-shift as required, she says.

Weitz thinks his active gel design could be used to give a new twist to tissue engineering, which usually involves using a static scaffold to guide the growth of replacement tissues from stem cells.

Scaffolds with tunable elasticity could allow more complex structures to be grown, says Weitz. For example, a floppy, untensed blob could be moved into position and then set in place with a pulse of ATP.

Because the physical properties of nearby surfaces are known to affect what kind of tissue stem cells grow into, a scaffold with controllable stiffness could direct a collection of stem cells to grow into different cell types, sculpting more intricate tissues that contain different kinds of cell.

Disrupt emergency exits to boost evacuation rates

Obstructing the exit could save lives (Image: Scott Craig/cancerbot/StockXchng)




Need to evacuate people quickly through a narrow opening? Put something in their way.

Physicists timed a crowd of 50 women as they exited as fast as possible through a door, and then repeated the experiment with a 20-centimetre-wide pillar placed 65 centimetres in front of the exit, offset to the left-hand side.

The obstacle improved the exit rate from 2.8 to 2.92 people per second – an extra seven people per minute.
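The quoted gain is straightforward arithmetic: a difference of 0.12 people per second, scaled up to a minute, is about seven extra people.

```python
# Exit rates measured with and without the offset pillar (people per second).
without_pillar = 2.8
with_pillar = 2.92

extra_per_minute = (with_pillar - without_pillar) * 60
print(round(extra_per_minute, 1))  # about 7 extra people every minute
```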

Daichi Yanagisawa at the University of Tokyo, Japan, who led the research team, explains that the pillar creates a relatively uncrowded area where it's needed most – just in front of the exit.

Usually, the exit becomes clogged by people competing for the small space, and the crowd is slowed. The pillar blocks pedestrians arriving at the exit from the left so effectively that the number of people attempting to occupy the space just in front of the exit is reduced, says Yanagisawa. With reduced crowding there are fewer conflicts and the outflow rate increases.

But the positioning of the pillar is crucial, says Yanagisawa. When the researchers moved the pillar so that it stood directly in front of the exit's centre, rather than to the left, the outflow rate dropped to 2.78 people per second.

That's because there's a second factor influencing outflow rate, dubbed the turning function. As pedestrians approach the busy doorway they weave and duck to squeeze through the crowd. With every turn they lose momentum and their walking speed decreases, which reduces the rate of outflow through the exit.

With the pillar offset to the left, the turning function of pedestrians approaching the exit from the left increases. Although they take longer to reach the exit, the total effect is an increase in outflow rate since those approaching from the centre or the right have a comparatively free and empty route to the exit.

But if the pillar is central, it affects the turning function of most pedestrians approaching the exit. Because more pedestrians are slowed down by the obstacle, the total outflow rate drops.

Mystery of the missing mini-galaxies

Something missing? (Image: NASA/ESA/STScI/AURA/A. Aloisi)


LIKE moths about a flame, thousands of tiny satellite galaxies flutter about our Milky Way. For astronomers this is a dream scenario, fitting perfectly with the established models of how our galaxy's cosmic neighbourhood should be. Unfortunately, it's a dream in more ways than one and the reality could hardly be more different.

As far as we can tell, barely 25 straggly satellites loiter forlornly around the outskirts of the Milky Way. "We see only about 1 per cent of the predicted number of satellite galaxies," says Pavel Kroupa of the University of Bonn in Germany. "It is the cleanest case in which we can see there is something badly wrong with our standard picture of the origin of galaxies."

It isn't just the apparent dearth of galaxies that is causing consternation. At a conference earlier this year in the German town of Bad Honnef, Kroupa and his colleagues presented an analysis of the location and motion of the known satellite galaxies. They reported that most of those galaxies orbit the Milky Way in an unexpected manner and that, taken together, their results are at odds with mainstream cosmology. There is "only one way" to explain the results, says Kroupa: "Gravity has to be stronger than predicted by Newton."

Challenging Newton's description of gravity is controversial. But regardless of where the truth lies, the Milky Way's satellite galaxies have become the latest battleground between the proponents of dark matter and theories of modified gravity.

Our standard picture of the universe comes from many decades of observations. It asserts that visible matter - the kind of stuff that you, me, the planets and stars are made of - is outweighed by a factor of 6 or 7 by invisible, cold dark matter. No one knows what dark matter is made of, but its existence has been postulated to explain how the stars in spiral galaxies can orbit at such breakneck speeds without being flung off into the void. There isn't enough ordinary matter out there to hold on to everything, so the extra gravitational grip provided by large amounts of dark matter stops these speeding stars flying off into space.

Dark matter is also thought to have played a key role in shaping the early universe. In the aftermath of the big bang, it was the dark stuff that first began to clump together under the force of gravity because its lack of interaction with light meant it was not blasted apart by the big-bang fireball. Later on, normal gaseous matter fell into these clumps - dubbed dark matter haloes - where it congealed into stars to make visible galaxies.

A key feature of this dark matter scenario is that dark matter haloes of all sizes form. According to the standard model of cosmology, a halo as large as the one thought to have seeded the Milky Way should be surrounded by thousands of mini haloes, which themselves should have seeded small satellite galaxies.

So why don't we see them? It could simply be because most of the satellite galaxies contain only a few thousand stars and their faintness makes them extremely hard to spot (see New Scientist, 15 August, p 10).

Another problem is that it is not obvious to the human eye that an apparent group of stars in the sky is a bound collection rather than a chance alignment of stars at wildly different distances. Proving their connectedness requires computerised search techniques and detailed analyses of the colours of the stars to give their relative distances and types - a painstaking and expensive business.

Tidal dwarfs

Nevertheless, the rate of discovery of satellite galaxies has been boosted in the past five years by a detailed search by the Sloan Digital Sky Survey. Whereas only nine satellites were discovered in the 30 years before SDSS, another 15 have been found since. The biggest are about 1000 light years across - less than 1 per cent of the diameter of the Milky Way's disc - and the smallest about 150 light years across. Despite this progress, the total number of satellites known falls far short of that predicted by the cold dark matter paradigm.

The missing-satellites problem is not the only puzzle. Kroupa and his Bonn colleague Manuel Metz, together with Gerhard Hensler at the University of Vienna, Austria, and Helmut Jerjen of Mount Stromlo Observatory near Canberra, Australia, have studied the location and motion of the small number of known satellite galaxies. They found that a high proportion of the galaxies appear to be confined to a plane perpendicular to the disc of the Milky Way. What's more, most of the galaxies orbit the Milky Way in the same direction. "This is completely incompatible with the dark matter model of the Milky Way's formation," says Kroupa. He points out that the satellites should be more like a swarm of bees, moving on random orbits and distributed in a spherical shell around our galaxy.

Robot with bones moves like you do

Like looking in the mirror (Image: The Robot Studio)


YOU may have more in common with this robot than any other - it was designed using your anatomy as a blueprint.

Conventional humanoid robots may look human, but the workings under their synthetic skins are radically different from our anatomy. A team with members across five European countries says this makes it difficult to build robots able to move like we do.

Their project, the Eccerobot, has been designed to duplicate the way human bones, muscles and tendons work and are linked together. The plastic bones copy biological shapes and are moved by kite line, which is tough like tendon, while elastic cords mimic the bounce of muscle.

Mimicking human anatomy is no shortcut to success, though, as even simple human actions like raising an arm involve a complex series of movements from many of the robot's bones, muscles and tendons. However, the team is convinced that solving these problems will enable the construction of a machine that interacts with its environment in a more human manner.

Simple human actions like raising an arm involve a complex series of movements for the robot

"We want to develop these ideas into a new kind of 'anthropomimetic robot' which can deal with and respond to the world in ways closer to the ways that humans do," says Owen Holland at the University of Sussex, UK, who is leading the project.

The team also intends to endow the robot with some human-like artificial intelligence.

Second robot deployed to help free stuck Mars rover


In the struggle to free the Mars rover Spirit from a sand trap, NASA engineers are bringing out the reserve troops. A second, lighter duplicate rover slid into a sandbox for testing this week, delaying any attempt to free Spirit by as much as three weeks, to mid-September.

Spirit has been stuck in a sandpit for nearly four months. Since late June, engineers have been trying to determine the best moves to extricate it by driving a test rover around a sandbox at NASA's Jet Propulsion Laboratory (JPL) in California.

Rover engineers had earlier announced that they were almost done with the testing and would be ready to move Spirit around 10 August, but they backtracked in a meeting on 6 August.

"We've come up with additional tests that we want to do, and additional computer modelling that we want to do as well," says rover project manager John Callas at JPL. "Now we're looking at the middle of September."

Wacom Intuos4



Pen tablets aren't just for commercial artists anymore. For proof, look no further than the just-released Wacom Intuos4 ($230 and up, street, depending on size), a worthy addition to the toolkits of professionals, casual artists, and photography enthusiasts alike. Used with Adobe Photoshop CS4 and Corel Painter 11, the Intuos4's impressive pressure sensitivity lets you lighten or darken areas of an image with precision. Putting the tablet to work with Photoshop's dodge, burn, blur, and sharpen tools gives you the kind of personal expression that's associated with a photographic darkroom. And if you're used to drawing with traditional art materials, such as chalk or watercolors, you'll find that the combination of Intuos4 and Painter comes remarkably close to that experience. Wacom is pretty much the only game in town when it comes to pen tablets, but the impressive Intuos4 proves that the lack of competition hasn't made the company lazy.

For this review, Wacom provided me with the medium version of the Intuos4. Think of this midsized, midpriced pen tablet as the Goldilocks model—not too large, not too expensive. The active area of the 10-by-14.6-inch (HW) pad, at 5.5 by 8.8 inches, is not as tall as, but a bit wider than, the 6-by-8-inch active area of the Intuos3 6x8. With the Intuos4, the company has switched from the active-area designations of its now-discontinued predecessor to generic small, medium, large, and extra-large sizes.

A rubberlike finish along the bottom half of the pen helps you keep a firm grip. But it also attracts dust and lint, so when not in use, keep the pen in the supplied holder. The tablet has a sleek, all-black appearance, as opposed to the Intuos3's institutional slate-gray styling. The eight programmable ExpressKeys are sensibly grouped to one side, rather than being split between the left- and right-hand sides, as they were on the Intuos3.

The ExpressKey functions, now highlighted with OLED-illuminated labels, are visually impressive. And a new Touch Ring lets you speed-dial through zoom levels and brush sizes. The Intuos4 offers 2,048 pressure levels (up from 1,024 for the Intuos3). Those who prefer a light touch will notice a difference immediately: The pen now reacts to just a single gram of pressure.
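To see why more pressure levels matter, consider how a driver might quantise pen force into a brush parameter. The mapping below is purely hypothetical, for illustration only; real tablet drivers apply configurable pressure curves:

```python
def pressure_to_opacity(raw: int, levels: int = 2048) -> float:
    """Map a raw pen-pressure reading (0 .. levels-1) to brush opacity 0.0-1.0.

    Illustrative only -- not Wacom's actual driver logic.
    """
    raw = max(0, min(raw, levels - 1))   # clamp to the valid sensor range
    return raw / (levels - 1)

# With 2,048 levels, even the lightest registered touch is a distinct step.
print(pressure_to_opacity(1))      # smallest non-zero pressure
print(pressure_to_opacity(2047))   # full pressure -> 1.0
```

Doubling the level count halves the size of each opacity step, which is what a light-handed artist feels as smoother response.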


Company

Wacom Technology Corporation

http://www.wacom.com

Spec Data
  • Price as Tested: $370.00 Street
  • Type: Business, Personal, Enterprise
  • OS Compatibility: Windows Vista, Windows XP, Linux, Mac OS
  • Tech Support: Wacom offers free technical support to its customers, 7AM to 5PM (Pacific), Monday through Thursday, and 8:30AM to 5PM (Pacific) on Friday.
  • Notes: Prices by size: small ($230), medium ($370), large ($500), and extra large ($790)

Your Computing Life, on a USB Thumb Drive


You can put an entire bootable operating system on a USB flash drive or customize your own collection of apps to run on any PC, anywhere. Here's how.

Why carry a bulky netbook or an oversize smartphone when you can have all the comforts of your own desktop—on any PC you encounter? That's the joy of carrying everything computable on a USB thumb drive. You can put an entire bootable operating system on these tiny flash-memory devices, or just carry around a few key files. The glorious in-between is using portable applications—software that runs off a USB drive, full installation on a PC not required.

If this concept sounds familiar to Macintosh users, it should. Since the dawn of System 1.0, Mac operating systems have had self-contained software. In Windows, installing a program, especially something as complicated as an office suite, typically involves stray files that reside in several areas of a hard drive. A DLL here, a swap file there, and of course, entries to the Windows Registry. It's what makes uninstalling many Windows programs particularly difficult. Hear me, Windows! Portable apps are what programs should always have been: self-contained and easy to get rid of. Even if one does write stray files to your hard drive, the rule is that the app should remove those files when you close it and disconnect the drive—provided you disconnect properly, of course.

Remember that you have to use Hi-Speed USB 2.0—not only on the drive, but also on the port. That 480-megabit-per-second (Mbps) speed is essential. This shouldn't be much of an issue, but it could crop up if you've got some ancient USB hub lying about with USB 1.1 ports. The tenfold speed increase coming with SuperSpeed USB 3.0 this year is only going to make portable apps all the more worthwhile.
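The practical difference is easy to estimate from the raw link rates. These are theoretical maxima (12, 480, and 5,000 Mbps); real-world throughput is always lower:

```python
def transfer_seconds(size_mb: float, link_mbps: float) -> float:
    """Best-case time to move size_mb megabytes over a link rated in Mbps."""
    return size_mb * 8 / link_mbps  # 8 bits per byte

# Moving a 1 GB collection of portable apps:
for name, mbps in [("USB 1.1", 12), ("USB 2.0", 480), ("USB 3.0", 5000)]:
    print(f"{name}: {transfer_seconds(1024, mbps):.0f} s")
```

Roughly eleven minutes over USB 1.1 shrinks to about 17 seconds over USB 2.0, which is why the port speed matters as much as the drive.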

Portable apps aren't limited just to USB thumb drives, either. Some can work on other types of flash memory, such as SD cards, or on other USB mass storage devices—even a media player like the iPod (though not the iPod touch or the iPhone). All that matters most of the time is that Windows sees the gadget as a USB Mass Storage device.


SYSTEMS


Newtech Starter

Asus / Gigabyte Motherboard with 3D Sound, 3D Video, and LAN on Board.
1GB DDRII 800 Kingston RAM.
160GB ATA 7200 RPM HDD.
20X DVD Writer Dual Layer

ATX Midi Case
No Monitor

AM2 X2 5000+:
$365
AM2 X2 5200+:
$390
AM2 X2 6000+:
$430


Intel Dual Core E5200:
$390
Intel Core2Duo E7400:
$470
Intel Core2Duo E8400:
$550


Newtech AM2 Dual Core Station

Gigabyte GA-M61PME-S2
1GB DDRII 800 Kingston RAM.
160GB SATA2 HDD
256MB (512MB Turbo) 2400 Pro
Internal Multi Card Reader
3D Sound, 10/100/1000 Lan,
DVD Writer Dual Layer
Gigabyte Intelli OPTICAL Mouse
Gigabyte Multimedia Keyboard
Midi ATX Case
19" AOC
LCD Monitor

AMD AM2 DualCore X2 5000+:
$670
AMD AM2 DualCore X2 5200+:
$685
AMD AM2 DualCore X2 6000+:
$715


Newtech Intel Dual Core Station

ASUS P5SD2-VM Motherboard
1GB DDRII 800 Kingston RAM.
160GB SATA2 HDD
256MB (512MB Turbo) 2400 Pro
Internal Multi Card Reader
3D Sound, 10/100/1000 Lan,
DVD Writer Dual Layer
Gigabyte Intelli OPTICAL Mouse
Gigabyte Multimedia Keyboard
Midi ATX Case
19" AOC
LCD Monitor

Intel Dual Core E2200:
$690
Intel Dual Core E5200:
$700
Intel Core 2 Duo E7400:
$770
Intel Core 2 Duo E8400:
$800


AM2 Athlon64 Professional System

ASUS M2N68-AM M/B
1GB DDRII 800 Kingston RAM.
320GB SATA2 HDD
256MB (512MB Turbo) 2400 Pro

3D AC97 Sound,

10/100/1000 Lan on Board
Internal Multi Card Reader
Dual Layer DVD Writer
1000W Subwoofer System
MS Intelli OPTICAL Mouse
MS Multimedia Keyboard
ATX Midi Case
20" ASUS VW220TR LCD Monitor

AMD AM2 DualCore X2 5000+:
$750
AMD AM2 DualCore X2 5200+:
$765
AMD AM2 DualCore X2 6000+:
$800


INTEL Dual Core Professional System

ASUS P5N73-AM M/B
1GB DDRII 800 Kingston RAM.
320GB SATA2 HDD
256MB (512MB Turbo) 2400 Pro

3D AC97 Sound & 10/100/1000 Lan on Board
Internal Multi Card Reader
Dual Layer DVD Writer
1000W Subwoofer System
GigaByte OPTICAL IntelliMouse
GigaByte Multimedia Keyboard
ATX Midi Case
20" ASUS VW220TR LCD Monitor

Intel DualCore E5200:
$795
Intel Core2Duo E7400:
$865
Intel Core2Duo E8400:
$940
Intel QuadCore Q6600:
$975


AM2 Dual Core Super System

ASUS M2N68-AM (for QuadCore Phenom 9550) Motherboard
2GB DDRII 800 Kingston RAM
320GB SATA2 HDD
512MB nVIDIA-9500GT PCI Express

3D AC97 Sound, 10/100/1000 Lan
Internal Multi Card Reader
Pioneer DVD Writer
1000W Subwoofer System
Microsoft Optical IntelliMouse
Microsoft Multimedia Keyboard
ATX Midi Case
22" Philips 220SW9FB LCD Monitor

AMD2 Athlon X2 5000+:
$910
AMD2 Athlon X2 5200+:
$925
AMD2 Athlon X2 6000+:
$960
AMD2 Phenom 9550:
$1035

Intel Core 2 Duo Super System

ASUS P5KPL-1600 M/B
2GB DDRII 800 Kingston RAM
320GB SATA2 HDD
512MB nVIDIA-9500GT Video Card

3D AC97 Sound, 10/100 Lan
Internal Multi Card Reader
20 x DVD Pioneer Writer
1000W Subwoofer System
Microsoft OPTICAL IntelliMouse
Microsoft Multimedia Keyboard
ATX Midi Case
22" Philips 220SW9FB LCD Monitor

Intel Core 2 Duo E7400:
$1045
Intel Core 2 Duo E8500:
$1120
Intel Quad Core Q8200:
$1140
Intel Quad Core Q6600:
$1155


AMD2+ Athlon64 Game System

GA-MA770-S3P
2GB DDRII 800 Kingston RAM
500GB SATA2 HDD
1GB nVIDIA-9500GT Video Card

3D AC97 Sound, 10/100 Lan
Internal Multi Card Reader

20 x DVD Pioneer Writer
1000W Subwoofer System
Microsoft OPTICAL IntelliMouse
Microsoft Multimedia Keyboard
ATX
CoolerMaster Midi Case
22" Philips 220SW9FB LCD Monitor

AM2 DualCore X2 5000+:
$1015
AM2 DualCore X2 5200+:
$1030
AM2 DualCore X2 6000+:
$1065
AM2 QuadCore Phenom 9550:
$1140


Intel Core 2 Duo Game System

GA-EP45-DS4P Motherboard
2GB DDRII 800 Kingston RAM
500GB SATA 16MB HDD
1GB nVIDIA-9500GT Video Card

3D AC97 Sound, 10/100 Lan
Internal Multi Card Reader

20 x DVD Pioneer Writer
1000W Subwoofer System
Microsoft OPTICAL IntelliMouse
Microsoft Multimedia Keyboard
ATX
CoolerMaster Midi Case
22" ViewSonic 2ms LCD Monitor

Intel Core 2 Duo E7400:
$1210
Intel Core 2 Duo E8400:
$1290
Intel Quad Core Q6600:
$1325
Intel Quad Core Q8300:
$1355
Intel Quad Core Q9400:
$1400

Computer virus

A computer virus is a computer program that can copy itself and infect a computer without the permission or knowledge of the owner. The term "virus" is also commonly but erroneously used to refer to other types of malware, adware and spyware programs that do not have the reproductive ability. A true virus can only spread from one computer to another (in some form of executable code) when its host is taken to the target computer; for instance because a user sent it over a network or the Internet, or carried it on a removable medium such as a floppy disk, CD, DVD, or USB drive. Viruses can increase their chances of spreading to other computers by infecting files on a network file system or a file system that is accessed by another computer.[1][2]

The term "computer virus" is sometimes used as a catch-all phrase for all types of malware. Malware includes computer viruses, worms, Trojan horses, most rootkits, spyware, dishonest adware, crimeware and other malicious and unwanted software.

Viruses are sometimes confused with computer worms and Trojan horses, which are technically different. A worm can use security vulnerabilities to spread itself to other computers without needing to be transferred as part of a host, and a Trojan horse is a program that appears harmless but has a hidden agenda. Worms and Trojans, like viruses, may harm a computer system's hosted data, functional performance or networking throughput when they are executed. Some viruses and other malware have symptoms noticeable to the computer user, but most are surreptitious. This makes them hard for the average user to notice, find and disable, and is why specialist anti-virus programs are now commonplace.

Most personal computers are now connected to the Internet and to local area networks, facilitating the spread of malicious code. Today's viruses may also take advantage of network services such as the World Wide Web, e-mail, Instant Messaging and file sharing systems to spread, blurring the line between viruses and worms. Furthermore, some sources use an alternative terminology in which a virus is any form of self-replicating malware.

Symptoms of a computer virus

If you suspect or confirm that your computer is infected with a computer virus, obtain current antivirus software. The following are some primary indicators that a computer may be infected:
  • The computer runs slower than usual.
  • The computer stops responding, or it locks up frequently.
  • The computer crashes, and then it restarts every few minutes.
  • The computer restarts on its own. Additionally, the computer does not run as usual.
  • Applications on the computer do not work correctly.
  • Disks or disk drives are inaccessible.
  • You cannot print items correctly.
  • You see unusual error messages.
  • You see distorted menus and dialog boxes.
  • There is a double extension on an attachment that you recently opened, such as a .jpg, .vbs, .gif, or .exe extension.
  • An antivirus program is disabled for no reason. Additionally, the antivirus program cannot be restarted.
  • An antivirus program cannot be installed on the computer, or the antivirus program will not run.
  • New icons appear on the desktop that you did not put there, or the icons are not associated with any recently installed programs.
  • Strange sounds or music plays from the speakers unexpectedly.
  • A program disappears from the computer even though you did not intentionally remove the program.
Note: These are common signs of infection. However, these signs may also be caused by hardware or software problems that have nothing to do with a computer virus. Unless you run the Microsoft Malicious Software Removal Tool and then install industry-standard, up-to-date antivirus software on your computer, you cannot be certain whether a computer is infected with a computer virus or not.
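One symptom in the list, a double extension on an attachment, is simple to screen for programmatically. A minimal sketch (the set of risky extensions is illustrative, not exhaustive):

```python
# Extensions that commonly hide executable payloads behind a harmless-looking name.
RISKY = {".exe", ".scr", ".vbs", ".bat"}

def has_double_extension(filename: str) -> bool:
    """True for names like 'photo.jpg.exe' that disguise an executable."""
    parts = filename.lower().rsplit(".", 2)
    return len(parts) == 3 and "." + parts[2] in RISKY

print(has_double_extension("holiday.jpg.exe"))   # True
print(has_double_extension("holiday.jpg"))       # False
```

Note that legitimate double extensions exist (archive.tar.gz, for example), which is why the check only flags names whose final extension is executable.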

Symptoms of worms and Trojan horse viruses in e-mail messages

When a computer virus infects e-mail messages or infects other files on a computer, you may notice the following symptoms:
  • The infected file may make copies of itself. This behavior may use up all the free space on the hard disk.
  • A copy of the infected file may be sent to all the addresses in an e-mail address list.
  • The computer virus may reformat the hard disk. This behavior will delete files and programs.
  • The computer virus may install hidden programs, such as pirated software. This pirated software may then be distributed and sold from the computer.
  • The computer virus may reduce security. This could enable intruders to remotely access the computer or the network.
  • You receive an e-mail message that has a strange attachment. When you open the attachment, dialog boxes appear, or a sudden degradation in system performance occurs.
  • Someone tells you that they have recently received e-mail messages from you that contained attached files that you did not send. The files that are attached to the e-mail messages have extensions such as .exe, .bat, .scr, and .vbs extensions.

Symptoms that may be the result of ordinary Windows functions

A computer virus infection may cause the following problems:
  • Windows does not start even though you have not made any system changes or even though you have not installed or removed any programs.
  • There is frequent modem activity. If you have an external modem, you may notice the lights blinking frequently when the modem is not being used. You may be unknowingly supplying pirated software.
  • Windows does not start because certain important system files are missing. Additionally, you receive an error message that lists the missing files.
  • The computer sometimes starts as expected. However, at other times, the computer stops responding before the desktop icons and the taskbar appear.
  • The computer runs very slowly. Additionally, the computer takes longer than expected to start.
  • You receive out-of-memory error messages even though the computer has sufficient RAM.
  • New programs are installed incorrectly.
  • Windows spontaneously restarts unexpectedly.
  • Programs that used to run stop responding frequently. Even if you remove and reinstall the programs, the issue continues to occur.
  • A disk utility such as Scandisk reports multiple serious disk errors.
  • A partition disappears.
  • The computer always stops responding when you try to use Microsoft Office products.
  • You cannot start Windows Task Manager.
  • Antivirus software indicates that a computer virus is present.
Note: These problems may also occur because of ordinary Windows functions or problems in Windows that are not caused by a computer virus.

How to remove a computer virus

Even for an expert, removing a computer virus can be a difficult task without the help of computer virus removal tools. Some computer viruses and other unwanted software, such as spyware, even reinstall themselves after the viruses have been detected and removed. Fortunately, by updating the computer and by using antivirus tools, you can help permanently remove unwanted software.

To remove a computer virus, follow these steps:
  1. Install the latest updates from Microsoft Update on the computer.
  2. Update the antivirus software on the computer. Then, perform a thorough scan of the computer by using the antivirus software.
  3. Download, install, and then run the Microsoft Malicious Software Removal Tool to remove existing viruses on the computer.

How to protect your computer against viruses

To protect your computer against viruses, follow these steps:
  1. On the computer, turn on the firewall.
  2. Keep the computer operating system up-to-date.
  3. Use updated antivirus software on the computer.
  4. Use updated antispyware software on the computer.

Computer security

Computer security is a branch of technology known as information security as applied to computers. The objective of computer security varies and can include protection of information from theft or corruption, or the preservation of availability, as defined in the security policy.

Computer security imposes requirements on computers that are different from most system requirements because they often take the form of constraints on what computers are not supposed to do. This makes computer security particularly challenging because it is hard enough just to make computer programs do everything they are designed to do correctly. Furthermore, negative requirements are deceptively complicated to satisfy and require exhaustive testing to verify, which is impractical for most computer programs. Computer security provides a technical strategy to convert negative requirements to positive enforceable rules. For this reason, computer security is often more technical and mathematical than some computer science fields.
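
The conversion from a negative requirement to a positive enforceable rule can be illustrated with a minimal Python sketch (the name pattern below is a hypothetical policy, chosen only for illustration): instead of trying to enumerate everything a program must reject, the rule states exactly what it accepts.

```python
import re

# Negative requirement ("never accept dangerous input") is untestable
# exhaustively. The positive rule ("accept ONLY names matching this
# pattern") is directly enforceable and checkable.
# Hypothetical policy: a letter followed by up to 31 word characters.
ALLOWED_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9_]{0,31}$")

def is_acceptable(name: str) -> bool:
    """Positive rule: accept only what is explicitly allowed."""
    return bool(ALLOWED_NAME.fullmatch(name))

print(is_acceptable("alice_01"))   # a name the policy allows
print(is_acceptable("rm -rf /"))   # rejected without enumerating attacks
```

Anything not explicitly allowed is rejected, so new kinds of malicious input need no new rules.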

Typical approaches to improving computer security (in approximate order of strength) can include the following:

  • Physical limits on access to computers, so that only those who will not compromise security can reach them.
  • Hardware mechanisms that impose rules on computer programs, thus avoiding dependence on computer programs for computer security.
  • Operating system mechanisms that impose rules on programs, to avoid trusting computer programs.
  • Programming strategies that make computer programs dependable and resistant to subversion.

Hardware mechanisms that protect computers and data

Hardware-based or -assisted computer security offers an alternative to software-only computer security. Devices such as dongles may be considered more secure because of the physical access required in order to compromise them.

While many software-based security solutions encrypt the data to prevent data from being stolen, a malicious program may corrupt the data in order to make it unrecoverable or unusable. Hardware-based security solutions can prevent read and write access to data and hence offer very strong protection against tampering.

Secure operating systems

One use of the term computer security refers to technology to implement a secure operating system. Much of this technology is based on science developed in the 1980s and used to produce what may be some of the most impenetrable operating systems ever. Though still valid, the technology is in limited use today, primarily because it imposes some changes to system management and also because it is not widely understood. Such ultra-strong secure operating systems are based on operating system kernel technology that can guarantee that certain security policies are absolutely enforced in an operating environment. An example of such a Computer security policy is the Bell-LaPadula model. The strategy is based on a coupling of special microprocessor hardware features, often involving the memory management unit, to a special correctly implemented operating system kernel. This forms the foundation for a secure operating system which, if certain critical parts are designed and implemented correctly, can ensure the absolute impossibility of penetration by hostile elements. This capability is enabled because the configuration not only imposes a security policy, but in theory completely protects itself from corruption. Ordinary operating systems, on the other hand, lack the features that assure this maximal level of security. The design methodology to produce such secure systems is precise, deterministic and logical.
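
The Bell-LaPadula model mentioned above can be sketched in a few lines of Python (the four-level lattice is the standard textbook example; the function names are ours): a subject may not read above its own level ("no read up") and may not write below it ("no write down").

```python
# Minimal sketch of the two Bell-LaPadula properties.
# Levels are ordered from least to most sensitive.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    # Simple-security property: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # *-property: no write down (prevents leaking data to lower levels).
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("secret", "confidential"))   # reading down is allowed
print(can_write("secret", "confidential"))  # writing down is forbidden
```

The asymmetry is the point: information may flow upward in sensitivity but never downward.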

Systems designed with such methodology represent the state of the art of computer security, although products using such security are not widely known. In sharp contrast to most kinds of software, they meet specifications with verifiable certainty comparable to specifications for size, weight and power. Secure operating systems designed this way are used primarily to protect national security information, military secrets, and the data of international financial institutions. These are very powerful security tools, and very few secure operating systems have been certified at the highest level to operate over the range of "Top Secret" to "unclassified" (including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS LAN). The assurance of security depends not only on the soundness of the design strategy, but also on the assurance of correctness of the implementation, and therefore there are degrees of security strength defined for COMPUSEC. The Common Criteria quantifies the security strength of products in terms of two components, security functionality and assurance level (such as EAL levels), and these are specified in a Protection Profile for requirements and a Security Target for product descriptions. None of these ultra-high-assurance secure general-purpose operating systems has been produced for decades or certified under the Common Criteria.

In USA parlance, the term High Assurance usually suggests the system has the right security functions that are implemented robustly enough to protect DoD and DoE classified information. Medium assurance suggests it can protect less valuable information, such as income tax information. Secure operating systems designed to meet medium robustness levels of security functionality and assurance have seen wider use within both government and commercial markets. Medium-robust systems may provide the same security functions as high-assurance secure operating systems, but do so at a lower assurance level (such as Common Criteria levels EAL4 or EAL5). Lower levels mean we can be less certain that the security functions are implemented flawlessly, and therefore that the system is less dependable. These systems are found in use on web servers, guards, database servers, and management hosts, and are used not only to protect the data stored on these systems but also to provide a high level of protection for network connections and routing services.

Security architecture


Security architecture can be defined as the design artifacts that describe how the security controls (security countermeasures) are positioned and how they relate to the overall information technology architecture. These controls serve to maintain the system's quality attributes, among them confidentiality, integrity, availability, accountability and assurance. In simpler words, a security architecture is the plan that shows where security measures need to be placed. If the plan describes a specific solution, then, prior to building such a plan, one would make a risk analysis. If the plan describes a generic high-level design (reference architecture), then the plan should be based on a threat analysis.

Security by design


The technologies of computer security are based on logic. There is no universal standard notion of what secure behavior is; "security" is a concept that is unique to each situation. Security is extraneous to the function of a computer application rather than part of it, so security necessarily imposes restrictions on the application's behavior.

There are several approaches to security in computing, sometimes a combination of approaches is valid:

  1. Trust all the software to abide by a security policy but the software is not trustworthy (this is computer insecurity).
  2. Trust all the software to abide by a security policy and the software is validated as trustworthy (by tedious branch and path analysis for example).
  3. Trust no software but enforce a security policy with mechanisms that are not trustworthy (again this is computer insecurity).
  4. Trust no software but enforce a security policy with trustworthy mechanisms.

Many systems have unintentionally resulted in the first possibility. Since approach two is expensive and non-deterministic, its use is very limited. Approaches one and three lead to failure. Because approach number four is often based on hardware mechanisms and avoids abstractions and a multiplicity of degrees of freedom, it is more practical. Combinations of approaches two and four are often used in a layered architecture with thin layers of two and thick layers of four.

There are myriad strategies and techniques used to design security systems. There are few, if any, effective strategies to enhance security after design.

One technique enforces the principle of least privilege to great extent, where an entity has only the privileges that are needed for its function. That way even if an attacker gains access to one part of the system, fine-grained security ensures that it is just as difficult for them to access the rest.
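
Least privilege can be sketched as follows (the entity names and privilege strings are hypothetical, chosen for illustration): every operation verifies that the acting entity holds exactly the privilege it needs, so compromising one entity grants nothing beyond its own narrow function.

```python
# Sketch of least privilege: each entity carries only the privileges
# its function requires, and every operation checks them.
class Entity:
    def __init__(self, name, privileges):
        self.name = name
        self.privileges = frozenset(privileges)

def require(entity, privilege):
    """Raise unless the entity holds the needed privilege."""
    if privilege not in entity.privileges:
        raise PermissionError(f"{entity.name} lacks {privilege!r}")

# A log-reading service gets read access to logs and nothing else.
log_reader = Entity("log_reader", {"read:logs"})
require(log_reader, "read:logs")        # allowed: needed for its function
# require(log_reader, "write:config")   # would raise PermissionError
```

An attacker who subverts `log_reader` can read logs, but the fine-grained check stops any attempt to touch configuration.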

Furthermore, by breaking the system up into smaller components, the complexity of individual components is reduced, opening up the possibility of using techniques such as automated theorem proving to prove the correctness of crucial software subsystems. This enables a closed-form solution to security that works well when only a single well-characterized property can be isolated as critical, and that property is also amenable to mathematical analysis. Not surprisingly, it is impractical for generalized correctness, which probably cannot even be defined, much less proven. Where formal correctness proofs are not possible, rigorous use of code review and unit testing represents a best-effort approach to making modules secure.

The design should use "defense in depth", where more than one subsystem needs to be violated to compromise the integrity of the system and the information it holds. Defense in depth works when the breaching of one security measure does not provide a platform to facilitate subverting another. Also, the cascading principle acknowledges that several low hurdles do not make a high hurdle, so cascading several weak mechanisms does not provide the safety of a single stronger mechanism.

Subsystems should default to secure settings, and wherever possible should be designed to "fail secure" rather than "fail insecure" (see fail safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure.
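
A fail-secure default can be sketched in a few lines of Python (the policy table is hypothetical): if the access check cannot complete for any reason, the answer is denial rather than permission.

```python
# "Fail secure": when a check cannot be completed, deny by default.
# Hypothetical policy table; unknown users are not listed at all.
POLICY = {"alice": True, "bob": False}

def access_allowed(user: str) -> bool:
    try:
        return POLICY[user]      # raises KeyError for unknown users
    except Exception:
        return False             # fail secure: any failure means denial

print(access_allowed("alice"))   # explicitly allowed
print(access_allowed("mallory")) # unknown user: denied, not crashed open
```

The equivalent "fail insecure" design would return True on error, turning every bug in the checker into an open door.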

In addition, security should not be an all or nothing issue. The designers and operators of systems should assume that security breaches are inevitable. Full audit trails should be kept of system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks. Finally, full disclosure helps to ensure that when bugs are found the "window of vulnerability" is kept as short as possible.
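
An append-only, tamper-evident audit trail can be sketched with a hash chain (a simplified illustration, not a complete remote-logging design): each entry's digest covers the previous digest, so altering an earlier record invalidates every record after it.

```python
import hashlib

# Tamper-evident audit trail: each record's hash covers the previous
# hash, chaining the entries together. A real deployment would also
# ship the log to a remote, append-only store.
def append_entry(log, message):
    prev = log[-1][1] if log else "0" * 64
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    log.append((message, digest))

def verify(log):
    prev = "0" * 64
    for message, digest in log:
        if hashlib.sha256((prev + message).encode()).hexdigest() != digest:
            return False          # chain broken: an entry was altered
        prev = digest
    return True
```

An intruder who edits an early entry to cover their tracks breaks the chain, and `verify` reveals the tampering.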

Early history of security by design

The early Multics operating system was notable for its early emphasis on computer security by design, and Multics was possibly the very first operating system to be designed as a secure system from the ground up. In spite of this, Multics' security was broken, not once, but repeatedly. The strategy was known as 'penetrate and test' and has become widely known as a non-terminating process that fails to produce computer security. This led to further work on computer security that prefigured modern security engineering techniques producing closed form processes that terminate.

Secure coding


If the operating environment is not based on a secure operating system capable of maintaining a domain for its own execution, capable of protecting application code from malicious subversion, and capable of protecting the system from subverted code, then high degrees of security are understandably not possible. While such secure operating systems are possible and have been implemented, most commercial systems fall into a 'low security' category because they rely on features not supported by secure operating systems (such as portability). In low-security operating environments, applications must be relied on to participate in their own protection. There are 'best effort' secure coding practices that can be followed to make an application more resistant to malicious subversion.

In commercial environments, the majority of software subversion vulnerabilities result from a few known kinds of coding defects. Common software defects include buffer overflows, format string vulnerabilities, integer overflow, and code/command injection.

Some common languages such as C and C++ are vulnerable to all of these defects (see Seacord, "Secure Coding in C and C++"). Other languages, such as Java, are more resistant to some of these defects, but are still prone to code/command injection and other software defects which facilitate subversion.
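
The injection risk can be illustrated with a short Python sketch (the attack string is hypothetical): concatenating user input into a shell command lets metacharacters such as `;` execute, while passing an argument list bypasses the shell so the input stays inert data.

```python
import subprocess

# Command injection sketch: building a shell command by string
# concatenation is unsafe in any language; passing an argument list
# bypasses the shell, so metacharacters lose their meaning.
user_input = "notes.txt; echo pwned"   # hypothetical attack string

# Unsafe pattern (do NOT do this): the shell would interpret ';'
# and run the injected command.
#   subprocess.run("cat " + user_input, shell=True)

# Safe pattern: the string is delivered as one literal argument.
result = subprocess.run(["echo", user_input],
                        capture_output=True, text=True)
print(result.stdout.strip())   # echoed verbatim; nothing was executed
```

The same discipline applies to SQL (parameterized queries) and any other case where data is spliced into code.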

Recently another bad coding practice has come under scrutiny: dangling pointers. The first known exploit for this particular problem was presented in July 2007. Before this publication the problem was known but considered to be academic and not practically exploitable.

In summary, 'secure coding' can provide significant payback in low-security operating environments and is therefore worth the effort. Still, there is no known way to provide a reliable degree of subversion resistance with any degree or combination of 'secure coding'.

Capabilities vs. ACLs

Within computer systems, the two fundamental means of enforcing privilege separation are access control lists (ACLs) and capabilities. The semantics of ACLs have been proven to be insecure in many situations (e.g., Confused deputy problem). It has also been shown that ACL's promise of giving access to an object to only one person can never be guaranteed in practice. Both of these problems are resolved by capabilities. This does not mean practical flaws exist in all ACL-based systems, but only that the designers of certain utilities must take responsibility to ensure that they do not introduce flaws.

Unfortunately, for various historical reasons, capabilities have been mostly restricted to research operating systems and commercial OSs still use ACLs. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open source project in the area is the E language.

First the Plessey System 250 and then Cambridge CAP computer demonstrated the use of capabilities, both in hardware and software, in the 1970s, so this technology is hardly new. A reason for the lack of adoption of capabilities may be that ACLs appeared to offer a 'quick fix' for security without pervasive redesign of the operating system and hardware.
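
The contrast between the two models can be sketched in Python (object names and rights are illustrative): an ACL decides by looking the subject up in a list attached to the object, while a capability is an unforgeable token whose mere possession is the authority.

```python
import secrets

# ACL style: the object carries a list of subjects and their rights;
# every access is decided by identity lookup.
acl = {"report.txt": {"alice": {"read"}}}

def acl_allows(subject, obj, right):
    return right in acl.get(obj, {}).get(subject, set())

# Capability style: a random, unguessable token grants one right on
# one object; whoever holds the token holds the authority.
capabilities = {}

def mint_capability(obj, right):
    token = secrets.token_hex(16)
    capabilities[token] = (obj, right)
    return token

def cap_allows(token, obj, right):
    return capabilities.get(token) == (obj, right)
```

Note how the capability check never asks *who* is acting, which is exactly what removes the confused-deputy ambiguity: a program can only pass on tokens it actually holds.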

The most secure computers are those not connected to the Internet and shielded from any interference. In the real world, the most security comes from operating systems where security is not an add-on, such as IBM's OS/400. OS/400 almost never shows up in lists of vulnerabilities, for good reason: years may elapse between one problem needing remediation and the next.

A good example of a secure system is EROS; see also the section on secure operating systems above. TrustedBSD is an example of an open source project with a goal, among other things, of building capability functionality into the FreeBSD operating system. Much of the work is already done.

Applications

Computer security is critical in almost any technology-driven industry that operates on computer systems. Addressing the countless vulnerabilities of computer-based systems is an integral part of maintaining an operational industry.

In aviation

The aviation industry is especially important when analyzing computer security because the involved risks include expensive equipment and cargo, transportation infrastructure, and human life. Security can be compromised by hardware and software malpractice, human error, and faulty operating environments. Threats that exploit computer vulnerabilities can stem from sabotage, espionage, industrial competition, terrorist attack, mechanical malfunction, and human error.

The consequences of a successful deliberate or inadvertent misuse of a computer system in the aviation industry range from loss of confidentiality to loss of system integrity, which may lead to more serious concerns such as data theft or loss, and network and air traffic control outages, which in turn can lead to airport closures, loss of aircraft, and loss of passenger life. Military systems that control munitions can pose an even greater risk.

An attack does not need to be very high-tech or well funded: a power outage at an airport alone can cause repercussions worldwide. One of the easiest to exploit and, arguably, most difficult to trace security vulnerabilities involves transmitting unauthorized communications over specific radio frequencies. These transmissions may spoof air traffic controllers or simply disrupt communications altogether. Such incidents are very common, having altered flight courses of commercial aircraft and caused panic and confusion in the past. Controlling aircraft over oceans is especially dangerous because radar surveillance extends only 175 to 225 miles offshore; beyond the radar's sight, controllers must rely on periodic radio communications with a third party.

Lightning, power fluctuations, surges, brown-outs, blown fuses, and various other power outages instantly disable all computer systems, since they depend on an electrical source. Other accidental and intentional faults have caused significant disruption of safety-critical systems throughout the last few decades, and dependence on reliable communication and electrical power only jeopardizes computer safety.

Notable system accidents

In 1994, over a hundred intrusions were made by unidentified hackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horse viruses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files, such as air tasking order systems data, and were furthermore able to penetrate connected networks of the National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some defense contractors, and other private sector organizations, by posing as a trusted Rome center user. Today, a technique called ethical hack testing is used to remediate these issues.

Electromagnetic interference is another threat to computer safety and in 1989, a United States Air Force F-16 jet accidentally dropped a 230 kg bomb in West Georgia after unspecified interference caused the jet's computers to release it.

A similar telecommunications accident also happened in 1994, when two UH-60 Blackhawk helicopters were destroyed by F-15 aircraft in Iraq because the IFF system's encryption system malfunctioned.[citation needed]

Terminology

The following terms are used in engineering secure systems.

  • A bigger OS, capable of providing a standard API like POSIX, can be built on a secure microkernel using small API servers running as normal programs. If one of these API servers has a bug, the kernel and the other servers are not affected: e.g. Hurd or Minix 3.
  • Authentication techniques can be used to ensure that communication end-points are who they say they are.
  • Automated theorem proving and other verification tools can enable critical algorithms and code used in secure systems to be mathematically proven to meet their specifications.
  • Capability and access control list techniques can be used to ensure privilege separation and mandatory access control. Their use is discussed in the Capabilities vs. ACLs section above.
  • Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers.
  • Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that data exchanged between systems can be intercepted or modified.
  • Firewalls can either be hardware devices or software programs. They provide some protection from online intrusion, but since they allow some applications (e.g. web browsers) to connect to the Internet, they don't protect against some unpatched vulnerabilities in these applications (e.g. lists of known unpatched holes from Secunia and SecurityFocus).
  • Mandatory access control can be used to ensure that privileged access is withdrawn when privileges are revoked. For example, deleting a user account should also stop any processes that are running with that user's privileges.
  • Secure cryptoprocessors can be used to leverage physical security techniques to protect the security of the computer system.
  • Simple microkernels can be written so small that we can be confident they contain no bugs: e.g. EROS and Coyotos.

Some of the following items relate more directly to computer insecurity:

  • Access authorization restricts access to a computer to a group of users through the use of authentication systems. These systems can protect either the whole computer - such as through an interactive logon screen - or individual services, such as an FTP server. There are many methods for identifying and authenticating users, such as passwords, identification cards, and, more recently, smart cards and biometric systems.
  • Anti-virus software consists of computer programs that attempt to identify, thwart and eliminate computer viruses and other malicious software (malware).
  • Applications with known security flaws should not be run. Either leave them turned off until they can be patched or otherwise fixed, or delete them and replace them with some other application. Publicly known flaws are the main entry point used by worms to automatically break into a system and then spread to other systems connected to it. The security website Secunia provides a search tool for unpatched known flaws in popular products.

  • Cryptographic techniques involve transforming information, scrambling it so it becomes unreadable during transmission. The intended recipient can unscramble the message, but eavesdroppers cannot.
  • Backups are a way of securing information; they are another copy of all the important computer files kept in another location. These files are kept on hard disks, CD-Rs, CD-RWs, and tapes. Suggested locations for backups are a fireproof, waterproof, and heatproof safe, or a separate, offsite location from that in which the original files are contained. Some individuals and companies also keep their backups in safe deposit boxes inside bank vaults. There is also a fourth option, which involves using one of the file hosting services that backs up files over the Internet for both businesses and individuals.
    • Backups are also important for reasons other than security. Natural disasters, such as earthquakes, hurricanes, or tornadoes, may strike the building where the computer is located. The building can be on fire, or an explosion may occur. There needs to be a recent backup at an alternate secure location, in case of such kind of disaster. Further, it is recommended that the alternate location be placed where the same disaster would not affect both locations. Examples of alternate disaster recovery sites being compromised by the same disaster that affected the primary site include having had a primary site in World Trade Center I and the recovery site in 7 World Trade Center, both of which were destroyed in the 9/11 attack, and having one's primary site and recovery site in the same coastal region, which leads to both being vulnerable to hurricane damage (e.g. primary site in New Orleans and recovery site in Jefferson Parish, both of which were hit by Hurricane Katrina in 2005). The backup media should be moved between the geographic sites in a secure manner, in order to prevent them from being stolen.
  • Encryption is used to protect the message from the eyes of others. It can be done in several ways: by transposing characters, replacing characters with others, or even removing characters from the message. These methods must be used in combination to make the encryption secure enough, that is, sufficiently difficult to crack. Public-key encryption is a refined and practical way of doing encryption. It allows, for example, anyone to write a message for a list of recipients that only those recipients will be able to read.
  • Firewalls are systems which help protect computers and computer networks from attack and subsequent intrusion by restricting the network traffic which can pass through them, based on a set of system administrator defined rules.
  • Honey pots are computers that are either intentionally or unintentionally left vulnerable to attack by crackers. They can be used to catch crackers or fix vulnerabilities.
  • Intrusion-detection systems can scan a network for people that are on the network but who should not be there or are doing things that they should not be doing, for example trying a lot of passwords to gain access to the network.
  • Pinging. The ping application can be used by potential crackers to find out whether an IP address is reachable. If a cracker finds a computer, they can try a port scan to detect and attack services on that computer.
  • Social engineering awareness: keeping employees aware of the dangers of social engineering, and having a policy in place to prevent social engineering, can reduce successful breaches of the network and servers.
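
The rule-based filtering that firewalls perform, as described above, can be sketched in a few lines of Python (the ports and actions form a hypothetical ruleset): rules are evaluated in order, the first match wins, and anything unmatched is denied.

```python
# Sketch of firewall rule evaluation: administrator-defined rules are
# checked in order; the first matching rule decides, and the fallback
# is default deny. Ports and actions below are a hypothetical policy.
RULES = [
    {"action": "allow", "port": 443},   # HTTPS
    {"action": "allow", "port": 22},    # SSH
    {"action": "deny",  "port": 23},    # Telnet, explicitly blocked
]

def filter_packet(port: int) -> str:
    for rule in RULES:
        if rule["port"] == port:
            return rule["action"]
    return "deny"                       # default deny: fail secure

print(filter_packet(443))   # matched allow rule
print(filter_packet(8080))  # no rule matches: denied by default
```

The default-deny fallback is what makes this a positive, enforceable policy: traffic is permitted only when a rule explicitly says so.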