Virtual Servers Cut Hosting Costs and Offer Flexibility

The virtual computer is an old dream, a long story associated with the evolution of the computer. The goal has always been to make the software independent of the hardware, enabling the hardware to evolve without the software having to be rewritten. We all know what it is to suffer from service migrations, server reinstallations and hardware renewals, which cause inconvenient interruptions to our work. Virtualization, however, has always faced performance problems: the need for better performance in terms of speed and development time has always been opposed to the need for portability and investment preservation.

Historical perspective

Between 1977 and 1979 Unix tried to address the issue of virtualization with the C language: "Write it in C, it will run on any Unix." But just a few years later it was shown that the C language did not solve all of the problems. In the 1980s the IBM 390 system had specialized circuits in the CPU that enabled it to virtualize itself. The virtual machine idea was there: IBM's Virtual Machine operating system provided each user with his or her own independent virtual computer. In 1995 the Web generated another virtual machine: the Java Virtual Machine. The idea was to have a virtual machine and an environment that were independent of the underlying operating system. This time the slogan was "Write once, run everywhere." Again, not all problems were solved by Java, but this time it may have been due to commercial interests rather than to technical difficulties.

So what's new?

Today virtual technology has become standardized: the Intel PC is now a consolidated standard, and the virtual Intel PC has become the new virtual machine base layer. It can run any operating system supported by the Intel PC, from Windows to Linux, and possibly Mac OS. It can run on any PC processor vendor (Intel or AMD) and on any processor model (Itanium, Pentium 4, Opteron and so on). Today's emulation overhead is small, and the emulation of a PC on a PC is very efficient. Furthermore, Intel and AMD processors now integrate a dedicated virtualization technology layer to enhance the performance of emulated machines.

A case for virtual servers

CERN's NICE custom servers are a good candidate for virtualization. At the moment the IT/IS group provides a custom Windows server hosting service (in the NICE environment) for groups and departments that require an "out of the box" Windows server solution (see cern.ch/Win/Help/?kbid=251010). These custom Windows servers are provided with a high-availability service level agreement: servers are hosted in the CERN Computer Centre, connected to uninterruptible power supplies and monitored around the clock. Daily back-ups, hardware and operating-system maintenance, security scans and patches are all transparent to the customer. With this kind of solution the customer is free to focus on the applications rather than on the server that is hosting them. Most of the time customers are unwilling to share the server with others, and they are ready to pay a kind of rental. This service is popular: IT/IS receives several requests each month from sources such as LHC controls and experiments (like ALICE), technical services and video-streaming services. Today more than 60 servers run in the custom Windows server hosting service, and some weaknesses have been found that could be solved using virtualization. Installing and maintaining physical servers is time consuming, and it requires a management overhead for logistics and resource planning. The space in the CERN Computer Centre is also a scarce resource that cannot be extended indefinitely. The IT/IS group has also noticed that some of the servers are underutilized, with only about 23% of the CPU in use.

Managing hardware

Virtualization creates a clear separation between the management of the hardware and the management of the server (software); these could even be performed by different teams. The tasks required to manage the hardware include basic maintenance, but mostly involve supporting a large pool of servers. The management team has to ensure that enough server hardware is available to satisfy the global demand for CPU and storage. The team allocates server images to machines in the pool, manages server configuration, and can consider possible optimizations and automations (for example, an automatic reallocation to different hardware according to past performance requirements). A toy version of such an allocator is sketched below.
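For illustration only (this is not the IT/IS group's actual tooling), here is a minimal sketch of a greedy first-fit allocator that places guest images on pooled hosts; the host names, capacities and the GuestRequest structure are all invented for the example:

    # Hypothetical sketch: allocate virtual server images to pooled hosts.
    # A real system would also track past performance and reallocate guests.
    from dataclasses import dataclass, field

    @dataclass
    class Host:
        name: str
        cpus_free: float       # CPUs not yet reserved by guests
        disk_free_gb: float    # disk space not yet allocated
        guests: list = field(default_factory=list)

    @dataclass
    class GuestRequest:
        name: str
        cpus: float            # e.g. 1.0 to preallocate a full CPU
        disk_gb: float

    def first_fit(pool: list[Host], req: GuestRequest) -> Host | None:
        """Place the guest on the first host with enough spare capacity."""
        for host in pool:
            if host.cpus_free >= req.cpus and host.disk_free_gb >= req.disk_gb:
                host.cpus_free -= req.cpus
                host.disk_free_gb -= req.disk_gb
                host.guests.append(req.name)
                return host
        return None  # pool exhausted: more hardware must be added

    pool = [Host("vhost01", cpus_free=2.0, disk_free_gb=200.0),
            Host("vhost02", cpus_free=2.0, disk_free_gb=200.0)]
    placed = first_fit(pool, GuestRequest("alice-ctrl", cpus=1.0, disk_gb=40.0))
    print(placed.name if placed else "no capacity")

First-fit is merely the simplest policy to state; the automatic reallocation mentioned above would replace it with a policy driven by measured load.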
Advantages of virtual servers

Managing virtual servers is much simpler: "installing a server" becomes "loading a virtual machine image", and unprecedented automation can be achieved. The IT/IS group has started automating the Microsoft Virtual Server 2005 software by providing a Web interface for users of its custom server service (see cern.ch/WinServices/Services/WoD). The user can request a brand new server in just a few clicks: select an OS, duration, budget code and usage, then click Request (figure 1). Ten minutes later the virtual server is available and ready for use. The user is automatically added to the local administrators group for Windows servers, or to the sudo users list for Linux servers. A budget code is required to make sure that users don't abuse the virtual server facility: although a new server is just a few clicks away, computer resources at CERN are not free.

Once the server is installed and ready, the user can interact with it without leaving the office: the Reset and Turn On/Off buttons are links on a website (figure 2). The user can export and import the disk image through this interface: the image of a server can be stored once it has been configured, and used again later. A Web interface provides access to the server console, enabling BIOS settings to be changed and various configurations to be made. The server's physical configuration can also be edited using a website that gives access to a network card, CD/DVD reader, floppy disks and so on (figure 3).

Virtual servers on demand

Virtualization at CERN has been automated to fit the organization's generic needs: various operating systems can be requested, such as Windows 2003 classic, Windows 2003 with IIS (Web server), Linux SLC3 and SLC4. An empty image can also be requested: a virtual server with no operating system will be provided and the customer can install his or her own operating system manually. This can be done in two ways: the user can mount a CD/DVD image and boot from it to run the installer, or press F12 to boot from the network (PXE) and start one of various network installations. Alternatively, taking advantage of the disk images used in virtualization, the user can import a disk image to the virtual server guest. This disk image can be generated from a physical computer (any software that is compliant with the VHD file specification can achieve this, such as the WinImage shareware), or come from a previous virtual server on which the user exported the disk image using the provided Web interface.
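Since the import/export mechanism revolves around the VHD format, a small example may help to show what "compliant with the VHD file specification" means. The published format ends every image with a 512-byte footer whose first eight bytes are the ASCII cookie "conectix" and which records, among other things, the virtual disk's size and type. The sketch below reads that footer; the file name is hypothetical:

    # Inspect the footer that the VHD specification places in the last
    # 512 bytes of an image file. Offsets follow the public format document.
    import struct

    DISK_TYPES = {0: "none", 2: "fixed", 3: "dynamic", 4: "differencing"}

    def read_vhd_footer(path: str) -> dict:
        with open(path, "rb") as f:
            f.seek(-512, 2)                   # footer is the last 512 bytes
            footer = f.read(512)
        if footer[:8] != b"conectix":         # magic cookie of the format
            raise ValueError("not a VHD image")
        original_size, current_size = struct.unpack(">QQ", footer[40:56])
        disk_type = struct.unpack(">I", footer[60:64])[0]
        return {"original_gb": original_size / 2**30,
                "current_gb": current_size / 2**30,
                "type": DISK_TYPES.get(disk_type, "unknown")}

    print(read_vhd_footer("exported-guest.vhd"))   # hypothetical file name

The "fixed" and "dynamic" disk types reported here correspond to the static and dynamic drives discussed under Performance below.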
When a customer requests a new server, the resource is taken from the pool of available hardware: multiple, different operating systems can be hosted in the same box. The server is available in 10 minutes (the time it takes to copy the image and then join the CERN domain for a Windows OS, or install updates for a Linux OS). Sharing hardware of course leads to an important cost reduction: the IT/IS platform now hosts three virtual servers (guests) on each physical machine (host).

Performance

Users of the virtual servers have not noticed any difference in service. Of course, an application that is input/output and CPU intensive could suffer from virtualization, but classic applications do not. Today's service emulates servers with one CPU in 32-bit mode, running at the same speed as the host (2.8 GHz or more). A CPU can be preallocated: one guest can reserve 100% of one CPU. By default the three guests share the host's two CPUs according to their needs. The memory allocated is 2 GB per guest. Concerning input/output, the emulated disk drives can be IDE or SCSI, and can be dynamic or static. A dynamic drive uses only what it needs on the physical disk, whereas a static drive uses the full declared emulated size. Table 1 shows the read/write speed of dynamic and static drives compared with their physical host. The idea, however, is to use centralized file space, such as DFS or AFS, to avoid storing important data on virtual server guests. This makes it quicker to replace a guest server, and it enables data to be accessed easily from multiple guests. Concerning the network, the emulated cards have their own generated MAC addresses and share the physical host's Gigabit network card.

Servers on demand

The IT/IS group anticipates that it will provide more server types, with various combinations of operating systems and applications. Requests for custom server types are also expected, where users will import (and export) their own server images. A physical server can be virtualized by simply building an image of its hard drive(s). A "server on demand" service must be able to satisfy requests such as: "I need 20 servers with this image for one month"; "I need an image of this server replicated 10 times"; "I need more CPU/memory for my server"; and "I need a test environment, OS version n+1, to which I can migrate my current production services."

Batch systems

We can also imagine the future of batch systems. Instead of sending a piece of code to a pool of batch servers, the batch creator could virtualize his or her own machine (desktop, development environment and so on) and send this virtualized image to a pool of virtual servers. The user would then run the batch job in his or her preferred environment. Running a virtualized machine could also prevent batch scripts or programs from creating problems on the batch server, because all processes are restricted to their own virtual machine. A sketch of how such a pool might work follows.
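As a closing illustration (a sketch only, with invented names; start_guest() stands in for whatever control interface the hypervisor provides), here is the shape such a virtualized batch pool could take, with each job booted into its own guest so that a misbehaving script cannot disturb the batch host:

    # Hypothetical "virtualized batch" queue: a job carries a disk image
    # plus a command, and runs inside its own guest for isolation.
    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class BatchJob:
        image: str     # the user's exported disk image, e.g. a VHD
        command: str   # command to run inside the booted guest

    def start_guest(host: str, image: str) -> str:
        """Placeholder for the hypervisor call that boots a guest."""
        print(f"[{host}] booting guest from {image}")
        return f"guest-on-{host}"

    def run_pool(jobs: "Queue[BatchJob]", hosts: list[str]) -> None:
        while not jobs.empty():
            for host in hosts:
                if jobs.empty():
                    break
                job = jobs.get()
                guest = start_guest(host, job.image)
                print(f"[{guest}] running: {job.command}")

    q: Queue = Queue()
    q.put(BatchJob("dev-desktop.vhd", "make test"))
    run_pool(q, ["vhost01", "vhost02"])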