What Windows tool would a technician use to virtualize two operating systems

Installing and Configuring GFI Network Server Monitor

Brien Posey, in GFI Network Security and PCI Compliance Power Tools, 2009

Solutions Fast Track

Hardware and Software Requirements

No hardware requirements are stated.

The supported operating systems are Windows 2000 (SP4 or later), Windows XP Professional, and Windows Server 2003.

Windows Scripting Host 5.5 or later is required. This is included with Internet Explorer 5.5 and later.

Version 1.1 of the .NET Framework is also required.

A wide variety of Windows and Linux operating systems are supported on computers that are being monitored.

Installing GFI Network Server Monitor

You can use a SQL Server, Microsoft Access, or MSDE database; a SQL Server database is preferred in larger organizations.

You can configure GFI Network Server Monitor to use either a Microsoft Exchange Server or a stand-alone SMTP server for e-mail alerts. If you have an Exchange Server in your organization, I recommend using it.

Performing the Initial Configuration

Target computers with similar configurations should be grouped into folders.

Computers within a folder inherit the settings from the parent folder.

Creating Separate Folders

Folders let you group together servers with similar monitoring needs.

Unless you specify otherwise, servers within a folder inherit folder level settings.
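The inheritance behavior described above can be sketched in Python. This is a hypothetical illustration, not GFI's actual implementation; the class names, setting names, and values are invented for the example.

```python
# Hypothetical sketch of folder-level setting inheritance as described above:
# servers inherit monitoring settings from their folder (and its parents)
# unless a server-level override is specified.

class Folder:
    def __init__(self, name, settings=None, parent=None):
        self.name = name
        self.settings = settings or {}
        self.parent = parent

    def effective_settings(self):
        # Start from the parent chain, then apply this folder's overrides.
        base = self.parent.effective_settings() if self.parent else {}
        return {**base, **self.settings}

class Server:
    def __init__(self, name, folder, overrides=None):
        self.name = name
        self.folder = folder
        self.overrides = overrides or {}

    def effective_settings(self):
        # Server-level overrides win over inherited folder settings.
        return {**self.folder.effective_settings(), **self.overrides}

root = Folder("All Servers", {"check_interval_s": 300, "alert": "email"})
web = Folder("Web Servers", {"check_interval_s": 60}, parent=root)

www1 = Server("www1", web)                              # pure inheritance
www2 = Server("www2", web, overrides={"alert": "sms"})  # explicit override

print(www1.effective_settings())  # -> {'check_interval_s': 60, 'alert': 'email'}
print(www2.effective_settings())  # -> {'check_interval_s': 60, 'alert': 'sms'}
```

Grouping servers with similar monitoring needs into one folder means a setting changed once at the folder level propagates to every member automatically.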

URL: https://www.sciencedirect.com/science/article/pii/B9781597492850000170

Installing Sniffer Pro

Robert J. Shimonski, ... Yuri Gordienko, in Sniffer Pro Network Optimization and Troubleshooting Handbook, 2002

Troubleshooting the Installation

Sniffer Pro should be installed on only one of the supported operating systems. You should check to make sure that the system meets all the minimum requirements.

Before installing a new version of Sniffer Pro, any older version must be completely uninstalled. All .INI files and registry settings associated with the older version of the software must be removed manually before installing the new version.

A technician's toolkit can be very useful for troubleshooting problems. It might include straight-through and crossover cables, a mini-hub, an RJ-45 crimper, a punch-down tool, some screwdrivers, and a toner.

Hubs can come in handy when you want to use Sniffer Pro to capture traffic between a host and other parts of the network.

Along with your technician's tool kit, network diagrams, and documentation, the Sniffer Pro quick reference guide can also be useful. Always keep these items handy.

URL: https://www.sciencedirect.com/science/article/pii/B978193183657950006X

How Virtualization Happens

Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

Full Virtualization

Full virtualization is a virtualization technique used to provide a VME that completely simulates the underlying hardware. In this type of environment, any software capable of execution on the physical hardware can be run in the VM, and any OS supported by the underlying hardware can be run in each individual VM. Users can run multiple different guest OSes simultaneously. In full virtualization, the VM simulates enough hardware to allow an unmodified guest OS to be run in isolation. This is particularly helpful in a number of situations. For example, in OS development, experimental new code can be run at the same time as older versions, each in a separate VM. The hypervisor provides each VM with all the services of the physical system, including a virtual BIOS, virtual devices, and virtualized memory management. The guest OS is fully disengaged from the underlying hardware by the virtualization layer.

Full virtualization is achieved by using a combination of binary translation and direct execution. With full virtualization hypervisors, the physical CPU executes nonsensitive instructions at native speed; OS instructions are translated on the fly and cached for future use, and user level instructions run unmodified at native speed. Full virtualization offers the best isolation and security for VMs and simplifies migration and portability as the same guest OS instance can run on virtualized or native hardware. Figure 1.5 shows the concept behind full virtualization.

Figure 1.5. Full Virtualization Concepts
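The translate-and-cache behavior described above can be illustrated with a toy model. This is not a real hypervisor; the instruction names and the "safe" translation are invented solely to show the pattern of running nonsensitive instructions directly while translating sensitive ones once and caching the result.

```python
# Toy model of binary translation with caching, as described above:
# nonsensitive instructions execute directly at native speed, while
# sensitive (privileged) instructions are translated on first use and
# the translation is cached for future executions.

SENSITIVE = {"cli", "hlt", "mov_cr3"}  # illustrative instruction names

translation_cache = {}
translations_performed = 0

def translate(instr):
    global translations_performed
    translations_performed += 1
    return f"safe_{instr}"  # stand-in for an emitted safe code sequence

def execute(instr):
    if instr not in SENSITIVE:
        return instr  # direct execution, no hypervisor intervention
    if instr not in translation_cache:
        translation_cache[instr] = translate(instr)  # translate once
    return translation_cache[instr]                  # reuse cached result

trace = ["add", "cli", "mov", "cli", "hlt", "cli"]
executed = [execute(i) for i in trace]
print(executed)
print("translations:", translations_performed)  # cli translated only once
```

Even though "cli" appears three times in the trace, it is translated only once; subsequent occurrences hit the cache, which is why translation cost is amortized in a full-virtualization hypervisor.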

URL: https://www.sciencedirect.com/science/article/pii/B9781597495578000011

Exploitation

James Broad, Andrew Bindner, in Hacking with Kali, 2014

Postexploitation Modules

These modules are handy to have on hand and can automate many of the normal functions needed to establish sustained access, such as collecting passwords and PKI certificates, dropping keyloggers, and eavesdropping through a possibly attached microphone. On the left-hand side, the supported operating systems are listed per module. Click on the module's hyperlink on the right-hand side to activate the module through the session.

As an example, navigate to “Multi Gather OpenSSH PKI Credentials Collection” and click on the hyperlink located on the right-hand side of the web page. Just as before with the exploitation modules, a detailed overview of the module is available, along with a “Run Module” button at the bottom. See Figure 9.16.

Figure 9.16. Multi Gather OpenSSH PKI.

Click on the “Run Module” button. See Figure 9.17.

Figure 9.17. Run Module.

From Figure 9.17, the security tester can observe the copying of SSH PKI credentials. All downloaded files are stored in the /opt/metasploit/apps/pro/loot directory (Figure 9.18).

Figure 9.18. Loot.

The Metasploitable2 virtual machine is riddled with holes on purpose and should never be used as a base operating system. Take some time to review the skills that were just presented and see how many holes can be found.

URL: https://www.sciencedirect.com/science/article/pii/B9780124077492000094

Mobile Malware Mitigation Measures

In Mobile Malware Attacks and Defense, 2009

Frequently Asked Questions

Q

Do I really need to worry about security on my mobile phone?

A

Yes. While your security needs vary depending on how much information and access you keep on your phone, even the simplest use requires at least some basic best practices.

Q

Is third-party software really worth the cost and effort?

A

It depends a bit on your use model. Users with very simple needs might be able to get by with best practices and operating-system-supported functionality. More advanced users should consider additional security software.

Q

How do I know if my phone has been hacked?

A

This isn't much different from your desktop computer. Alerts from security software, odd behavior, strange entries on your bills, and vulnerability alerts are all good indicators that you should look closer.

Q

What's the difference between all these different mobile security products?

A

Some do differ in the functionality they offer. When comparing, consider if they offer anti-malware, firewall protection, encryption, and so on. When choosing between products with similar functionalities, read the reviews and pay attention to performance, user interfaces, and update support.

URL: https://www.sciencedirect.com/science/article/pii/B9781597492980000112

Developing Ethereal

In Ethereal Packet Sniffing, 2004

Prerequisites for Developing Ethereal

The first step in the development process is to acquire the Ethereal source. You can download many different distributions from the Ethereal website, such as the currently released source code or the last nightly backup of the source code. You can also utilize the Concurrent Versions System (CVS) to keep up to date throughout your development process. CVS is the most risky, compared to released versions of Ethereal, because you are compiling code that hasn't been fully tested. Generally, however, the CVS code is of very high quality.

Even if you have an issue with the current CVS code, you can generally get one of the members of the Ethereal mailing list ([email protected]) to make a quick change to resolve the issue. CVS gives you access to code changes as they are checked into the master build. It is the most up-to-date, but can contain unidentified bugs. Please keep in mind that the CVS distribution can be and is routinely updated as well. You might develop with the current released code and then find out that a specific function you are working with has changed. Instructions for utilizing the latest builds and CVS can also be found at the http://www.ethereal.com website.

Before you can add to or modify Ethereal, you must be able to build the application from source. To build from source you will need to acquire additional libraries and tools. Ethereal is a multiplatform application, meaning that it can run on many different operating systems. You will need to be able to build on the particular operating system that you will be developing on.

It is also important to understand that Ethereal is developed and built using a number of different programming languages. This includes many UNIX-based programs and shell scripts. For example, several modules within Ethereal are written in Python and Perl. Although it may not be necessary for you to be proficient in each programming language, you may occasionally need to understand enough about a language to make a simple change. The majority of the code base for Ethereal is ANSI-C. The requirement for ANSI-C is due to the portability of the code to multiple operating system platforms. Special care should be taken when writing in C to use only those functions that are defined as ANSI-C and are portable. You should be able to use just about any C compiler with the Ethereal source, including the GNU C Compiler (gcc) on Linux as well as Microsoft Visual C++ on Windows.

Damage & Defense

Portability

Before starting any work you need to read the portability section 1.1.1 of the README.developer document contained in the doc directory of the source distribution. The word portability is used in reference to the steps a developer should take to ensure that Ethereal source can be compiled on all of the supported operating systems. For example, you wouldn't want to use a function that only exists on a win32 platform. This would cause Ethereal source to not compile or build correctly on the other supported operating systems. Typically, when a program is written to one operating system platform and then made to run on a different platform, the process is called porting. This is where the name portability is derived from.

Skills

To build a new dissector or modify the main application, you will need to be able to program in C. However, please keep in mind that modifications to existing dissectors may require you to be knowledgeable in another language.

Modifications to the Ethereal GUI will require some knowledge of GTK. The GTK website at http://www.gtk.org contains online and downloadable tutorials for programming in GTK.

Contributions to the Ethereal project come from many different levels of developers. Some are novices while others might be considered experts. However, the overall Ethereal project is maintained by a group of highly experienced developers. New additions and contributions are first reviewed by this group and then incorporated into the source distribution following any necessary changes. In some cases, the individual who reviews the changes might recommend a specific change to the original developer; in other cases they may make the changes themselves.

Tools/Libraries

In most cases, you will need the developer kit for access to necessary libraries. A developer kit is different from the normal binary distribution. Generally, the developer package includes the compiled binaries for the operating system it was built for. For example, since Ethereal utilizes the GTK libraries for its GUI implementation, you will need to ensure that you have the developer kit for GTK. You will also need to make sure that you download the correct developer kit for the operating system that you are going to develop on. It is important to try to use the latest released version of the developer kit if possible. Although you might be able to build Ethereal with an older set of libraries, the results of the application running might not be as expected. However, in some cases this might not be an option. Some operating systems only support certain versions of support libraries. In general, you can consult the Ethereal developer mailing list or the developer section of the www.ethereal.com website.

Win32 ports of the required libraries are not necessarily located at their respective project sites. For example, the win32 port of the libpcap library is called WinPcap. The following web pages list places where you can look for Win32 library ports. The www.ethereal.com/distribution/win32/development web page contains most of what you will need, but if you want to build with GTK 2.x you will need additional library packages not listed on the Ethereal website. Refer to the web pages located at www.gimp.org/~tml/gimp/win32 for GTK 2.x information and access to the Win32 ports.

When building Ethereal, you will need the GTK and GLIB libraries. Ethereal can be built using the older GTK 1.2, 1.3, or the newer GTK 2.x versions. The newer 2.x versions of GTK add more font control and have a better look and feel. These libraries can be downloaded from www.gtk.org. The installation chapter in this book identifies some of these issues when installing on Solaris and Red Hat distributions.

The console version of Ethereal, called Tethereal, only requires the GLIB libraries. If you will only be building the Tethereal application, you will not need GTK.

If you will be building with packet capture support in Ethereal or Tethereal, you will need to make sure that you have the libpcap libraries from: www.tcpdump.org. Without packet capture support, users of the compiled program will only be able to view packet trace files. They will not be able to perform new packet captures. Win32 developers will need the WinPcap libraries instead of libpcap. These can be downloaded from http://winpcap.polito.it/.

The following is a list of libraries needed to build Ethereal. Remember that you will need to download the developer kit to acquire the necessary libraries for your operating system. Some packages are optional and can be linked to add additional features. On UNIX/Linux operating systems, the automake process will detect the installed libraries and identify the library packages that can be included when you build Ethereal. On Win32-based computers, the config.nmake file should be modified to define which libraries you wish to include in the build process. These libraries will then be added to the final binary during the linker stage of the build.

glib Low-level core library for GTK (required).

gettext GNU language conversion (required by glib).

libiconv Character set conversion library (required by glib).

GTK GIMP toolkit for creating graphical user interfaces (required for Ethereal build).

libpcap Packet capture library for UNIX/Linux-based operating systems.

WinPcap Packet capture library for Win32 based operating systems (optional).

ADNS GNU Advanced DNS client library (optional) adds DNS lookup support.

net-snmp Simple Network Management Protocol (SNMP) library (optional) adds SNMP support.

pcre Perl Compatible Regular Expressions library (optional) adds Perl expression filters.

zlib File compression library (optional) adds compressed file support.

If you will be building with GTK version 1.2 or 1.3, no additional libraries are needed for GTK. Otherwise, when building with GTK 2.x you will need the following additional libraries:

atk Accessibility toolkit (required).

pango Internalization of text (required).

Windows users must choose to build either from within Cygwin using gcc or with a Win32-based compiler such as Microsoft's Visual C++ (MSVC++). They will also need to download a number of additional libraries. The default location specified in the Ethereal distribution for the libraries on Win32 is C:\ethereal-win32-libs. You should download and extract each required library to this location; Ethereal's scripts will then locate the libraries at build time. Otherwise, you will need to modify the config.nmake file located in the main distribution directory to point to the correct location for each library.

Tools that you might need are specific to the operating system in which you need them to run. The Ethereal compile and build process utilizes a number of script files. These scripts will require a number of tools to run successfully. Most of the tools have their roots in the UNIX/Linux operating systems. To compile and build Ethereal on non-UNIX-based operating systems you will need to have access to similar tools.

Windows users will also need to install cygwin. Cygwin is a Linux-like environment for Windows-based computers. It gives both a Linux Application Program Interface (API) emulator as well as a set of Linux-based tools. These tools are what allow the scripts utilized by Ethereal during the build process to work on Windows-based computers. Cygwin can be downloaded and installed from www.cygwin.com.

Windows users will also need to download Python. Python can be downloaded and installed from www.python.org/.

Most UNIX and Linux-based operating systems will include a C compiler and many of the required tools needed to build Ethereal.

The following is a list of tools needed to compile and build Ethereal:

Cygwin Provides UNIX/Linux tools for Win32 developers. This is not needed for UNIX/Linux.

Perl Needed for all operating systems.

pod2man Part of Perl.

pod2html Part of Perl.

Python Needed for all operating systems.

Flex Needed for all operating systems.

Bison Needed for all operating systems.

Tools & Traps …

Building on UNIX and Linux-Based Operating Systems

Detailed instructions for building the Ethereal binaries from source are included in the INSTALL file, located in the main source directory. Chapter 3 of this book also outlines the build process on RedHat Linux.

Building on Windows-Based Operating Systems

Detailed instructions for building the Ethereal binaries from source are included in the file README.win32, located in the main source directory. This file includes instructions for building with both MSVC++ and Cygwin. It is also important to use CMD.EXE and not COMMAND.COM when attempting to build Ethereal. CMD.EXE provides long filename support, whereas the older COMMAND.COM is limited to 8.3 file naming conventions. Ethereal's source contains long-named files and cannot be built with COMMAND.COM.

Windows users may need to update or change the default environment variables for their compiler to locate additional support libraries. For example, when building Ethereal, the wiretap source must include header files for winsock support. It is important that the build process can locate the correct include files. Validate that the following user environment variables are defined correctly:

Include

Lib

It is also important to make sure that Cygwin is in the user's PATH environment variable so that the necessary Cygwin executables can be located during the build process. These executables are the Windows equivalents of the necessary UNIX/Linux binaries; for example, bison.exe is the equivalent of its UNIX/Linux counterpart bison.

URL: https://www.sciencedirect.com/science/article/pii/B9781932266825500155

Infrastructure as a Service

Dinkar Sitaram, Geetha Manjunath, in Moving To The Cloud, 2012

Accessing EC2 Using AWS Console

As with S3, EC2 can be accessed via the Amazon Web Services console at http://aws.amazon.com/console. Figure 2.7 shows the EC2 Console Dashboard, which can be used to create an instance (a compute resource), check the status of the user's instances, and even terminate an instance. Clicking on the “Launch Instance” button takes the user to the screen shown in Figure 2.8, where a set of supported operating system images (called Amazon Machine Images, AMI) are shown to choose from. The types of AMI and how to choose the right one are described in later sections of this chapter. Once the image is chosen, the EC2 instance wizard pops up (Figure 2.9) to help the user set further options for the instance, such as the specific OS kernel version to use and whether to enable monitoring (using the CloudWatch tool described in Chapter 8). Next, the user has to create at least one key pair, which is needed to securely connect to the instance. Follow the instructions to create a key pair and save the file (say my_keypair.pem) in a safe place. The user can reuse an already created key pair across many instances (analogous to using the same username and password to access many machines). Next, the security groups for the instance can be set to ensure the required network ports are open or blocked for the instance. For example, choosing the “web server” configuration will enable port 80 (the default HTTP port). More advanced firewall rules can be set as well. The final screen before launching the instance is shown in Figure 2.10. Launching the instance yields a public DNS name that the user can use to log in remotely, as if the cloud server were on the same network as the client machine.

Figure 2.7. AWS EC2 console.

Figure 2.8. Creating an EC2 instance using the AWS console.

Figure 2.9. The EC2 instance wizard.

Figure 2.10. Parameters that can be enabled for a simple EC2 instance.

For example, to start using the machine from a Linux client, the user gives the following command from the directory where the key-pair file was saved. After a few confirmation screens, the user is logged into the machine to use any Linux command. For root access the user needs to use the sudo command.

ssh -i my_keypair.pem ec2-67-202-62-112.compute-1.amazonaws.com

For Windows, the user needs to open the my_keypair.pem file and use the “Get Windows Password” button on the AWS Instance page. The console returns the administrator password, which can be used to connect to the instance using a Remote Desktop application (usually available at Start -> All Programs -> Accessories -> Remote Desktop Connection).
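The SSH command shown above can be assembled programmatically. The following Python helper is an illustrative sketch (the key-pair filename and DNS name are the example values from the text, and the optional user parameter is an assumption, since the login account varies by AMI):

```python
# Assemble the SSH command for connecting to an EC2 instance from the
# key-pair file and the instance's public DNS name, as shown above.

import shlex

def ec2_ssh_command(keypair_path, public_dns, user=None):
    # Some AMIs require a username prefix (e.g. a distribution-specific
    # account); when none is given, the bare DNS name is used as in the text.
    target = f"{user}@{public_dns}" if user else public_dns
    return f"ssh -i {shlex.quote(keypair_path)} {shlex.quote(target)}"

cmd = ec2_ssh_command("my_keypair.pem",
                      "ec2-67-202-62-112.compute-1.amazonaws.com")
print(cmd)
```

`shlex.quote` guards against shell metacharacters if the key file were saved under a path containing spaces; for plain names it leaves the arguments unquoted.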

How to use the AWS EC2 Console to request the computational, storage, and networking resources needed to set up and launch a web server is described in the Simple EC2 example: Setting up a Web Server section of this chapter.

URL: https://www.sciencedirect.com/science/article/pii/B9781597497251000020

Operating Systems

Thomas Sterling, ... Maciej Brodowicz, in High Performance Computing, 2018

11.3.1 Process States

At any one time each activated process managed by the OS exists in one of a number of states, depending on its current condition and activity. These process states are mutually exclusive and collectively exhaustive, in that they fully describe the possible lifecycles of a given process. Different OSs are distinguished in part by the possible process states each supports and employs in guiding the evolution of its constitutive processes, but they exhibit many similarities. Here a relatively simple machine in a fully functional state is considered to illustrate OS-supported process states, as shown in the diagram below. All OSs include these states, or refinements of them. For example, the Linux OS presented in Appendix B has a more diverse state structure, but all the states in this diagram can be mapped onto the Linux state diagram.

When a new process is initiated for the first time for a specified program, it enters the new state. In this state the process is being created and the necessary memory objects fully describing the process are being allocated and populated. When this has been accomplished, the process transitions into the ready state. In a symmetric manner, when the process has completed all work associated with it and deposited its results in the appropriate locations for future use, it enters the terminated state. Once in this state, the process is known to have finished execution. At this point the OS modifies its control tables to eliminate the context of the process and reclaim the physical and logical resources associated with the process.

The running state of the process is that condition under which the process is actually executing its instructions on the data associated with it. When running, the process is making progress toward completion of its workload. If in this state it reaches the point of completion, it transitions to the terminated state as described above. However, it is possible that other events will occur and require the process to suspend temporarily and resume at a later time. One of these circumstances can be an asynchronous external interrupt. An interrupt is a signal from any of several sources indicating that another process has immediate priority, such as an OS service routine that must be engaged for the system as a whole to progress. An interrupt will cause the current process in the running state to vacate the processing resources (e.g., CPU) and transition to the ready state. Alternatively, if a process in the running state needs to delay because of a wait event or an I/O request that may take tens of milliseconds, remaining in the running state would waste precious computing resources. Instead, the process will transition from the running state to the waiting state.

The waiting state is that condition of the process assumed when it is unable to proceed immediately with its execution because of a delay for a pending service, an access such as an I/O request to mass storage, or a need for user input. Once entered, a process remains in the waiting state until the source of the delay is cleared by some external action (e.g., the arrival of the data requested from secondary storage). In this way other processes can enter the running state and take advantage of the processor resources for greater efficiency of system usage. When the delaying condition is satisfied and the process can proceed forward, it is unlikely that the computing resources are immediately available, as one or more other processes are likely to be actively using them. Thus the process that had been in the waiting state transitions to the ready state of the process lifecycle. The OS draws upon the processes in the ready state to select the next process to be placed in the running state. Many processes may be pending in the ready state, waiting for their turn to begin executing, either for the first time after originating from the new state or resuming, having previously been in the running state at some time in the past. It is typical for a process to cycle back and forth among the three states, ready, running, and waiting, prior to completing its workload and finally entering the terminated state. In this way the user gets the impression that any number of processes are computing concurrently, when in fact they are timesharing the physical resources but switching states so quickly that they all appear to be making progress towards their end computational goals.
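The lifecycle described above can be sketched as a small state machine. This is a minimal illustration of the five states and the legal transitions named in the text, not any particular OS's implementation:

```python
# Minimal sketch of the process lifecycle described above: new, ready,
# running, waiting, and terminated, permitting only the legal transitions.

ALLOWED = {
    "new":        {"ready"},                            # creation completed
    "ready":      {"running"},                          # selected by the OS
    "running":    {"ready", "waiting", "terminated"},   # interrupt, delay, exit
    "waiting":    {"ready"},                            # delaying condition cleared
    "terminated": set(),                                # no further transitions
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = "new"

    def transition(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

# A typical cycle: ready/running/waiting repeat, then the process terminates.
p = Process(1)
for s in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    p.transition(s)
print(p.state)  # -> terminated
```

Note that a process can never jump from new directly to running: it must first pass through ready, exactly as the text describes.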

URL: https://www.sciencedirect.com/science/article/pii/B9780124201583000113

Network function virtualization

Riccardo Bassoli, in Computing in Communication Networks, 2020

7.2 Network function virtualization

Fig. 7.2 shows the logic architecture of NFV. The upper layer contains all virtual network functions, which represent the services. Next, these functions rely on the virtual resources that are dynamically assigned to them. These resources can be grouped into computing, storage, and network resources. Whereas the first two groups are focused on making a single VNF run properly, the third kind of resource permits inter-VNF communication and collaboration. Virtual resources represent a projection/mapping of physical resources onto virtualization layers, whereby the former provide computation, storage, and network communications at the hardware level. Finally, a vertical layer is responsible for management and orchestration of how hardware resources are mapped into virtual ones and how VNFs communicate and collaborate with each other.

Figure 7.2. Logic structure of network function virtualization.

Running virtual network functions on dedicated servers generally means keeping each hosting server continuously on, even when its hardware resources are not fully utilized. This characteristic would have led to an infrastructure challenge called server proliferation: growing numbers of very inefficiently used servers, significantly increasing power consumption for both operation and cooling, and rising expenses for the infrastructure to host them. Virtual machines initially helped to avoid that significant upcoming challenge, since it became possible to run multiple functions on the same server using a technique called consolidation. Virtual machines are mainly composed of three parts: i) the hosting Operating System (OS), ii) the hypervisor, and iii) the guest operating system. The first is the OS directly installed on the hardware. The second is software that hosts the different VMs and is responsible for resource management, monitoring, and managing the VMs via coordination with the underlying hosting OS. There are two kinds of hypervisors, type I and type II. The former is a hardware-based hypervisor, which does not need any host OS because it communicates directly with the hardware resources. The latter is a software-based hypervisor, which requires a host OS because it runs on top of the supported OS as an additional layer that interacts with the underlying hardware. Inside the VM, a guest OS runs all the virtual services. Fig. 7.3 illustrates a comparison among the aforementioned logic architectures used to realize NFV.


Figure 7.3. Comparison among different logic architectures to realize network function virtualization.
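The VM stack described above (host OS, hypervisor, guest OS) can be sketched as plain data structures. This is an illustrative model only, with made-up names, not any real virtualization API; it just shows how a type I hypervisor needs no host OS while a type II hypervisor does, and how consolidation places several guests on one server.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VirtualMachine:
    guest_os: str      # OS running inside the VM
    vcpus: int
    memory_gb: int

@dataclass
class Hypervisor:
    kind: str                        # "type-1" (bare metal) or "type-2" (hosted)
    host_os: Optional[str] = None    # only a type-2 hypervisor needs a host OS
    vms: List[VirtualMachine] = field(default_factory=list)

    def consolidate(self, vm: VirtualMachine) -> None:
        # Consolidation: run one more function/VM on the same physical server.
        self.vms.append(vm)

# Type I: communicates directly with the hardware, no host OS required.
bare_metal = Hypervisor(kind="type-1")
bare_metal.consolidate(VirtualMachine("Linux", vcpus=2, memory_gb=4))
bare_metal.consolidate(VirtualMachine("Windows", vcpus=2, memory_gb=4))

# Type II: runs as software on top of an existing host OS.
hosted = Hypervisor(kind="type-2", host_os="Windows")
hosted.consolidate(VirtualMachine("Linux", vcpus=1, memory_gb=2))

assert bare_metal.host_os is None and len(bare_metal.vms) == 2
```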

As Fig. 7.3 shows, virtual machines, containers, and unikernels are not the only ways to design and implement NFV. In 2014, Amazon proposed an additional solution called serverless. Serverless computing [148] is a paradigm that lets service developers ignore certain aspects, such as server management and resource provisioning, which become the responsibility of the platform provider. The common architecture for serverless solutions has five main components. First, the storage subsystem is the layer where states or data are made persistent so that they can be shared by different functions (applications). Second, the execution engine is an element running on each server that deals with incoming requests: it handles each request by launching a runtime environment (e.g., a container), with the required libraries, for the lifetime of the function. These containers are classified as cold or warm: a cold container is launched for each incoming request, whereas a warm container is already active and can be reused. Warm containers were introduced to reduce startup latency. Third and fourth, the message bus and the scheduler constitute the interface responsible for forwarding messages from front ends to execution engines. Finally, the front end is the interface for developers and their applications; multiple front ends can run behind a load balancer to improve scalability.
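The cold/warm distinction above can be sketched in a few lines. This is a toy model with assumed latency values and hypothetical names, not the behavior of any real serverless platform: the first invocation of a function pays a cold-start cost, and later invocations reuse the warm container.

```python
# Assumed, illustrative latencies (real platforms vary widely).
COLD_START_MS = 250   # launching a fresh runtime environment
WARM_START_MS = 5     # dispatching into an already-running container

class ExecutionEngine:
    def __init__(self):
        self.warm_pool = {}   # function name -> idle container id

    def invoke(self, function_name: str):
        """Return (container kind used, latency in ms)."""
        if function_name in self.warm_pool:
            # Warm path: a container for this function is already active.
            return ("warm", WARM_START_MS)
        # Cold path: launch a runtime environment, then keep it warm.
        self.warm_pool[function_name] = f"container-{function_name}"
        return ("cold", COLD_START_MS)

engine = ExecutionEngine()
print(engine.invoke("resize-image"))   # first call takes the cold path
print(engine.invoke("resize-image"))   # second call reuses the warm container
```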

Serverless computing has various advantages: the application needs no server or resource management, resource use is more efficient, costs are lower, and scalability is higher. On the other hand, the main drawback for applications in specific 5G verticals is the significant startup latency, which makes current serverless computing ineffective for low-latency communications. Since VMs still incur some overhead to emulate hardware inside a virtual environment, a lighter virtual package, the container, was created, with OS-level isolation and a shared OS kernel. Virtualization based on containers is more efficient because containers use lightweight APIs instead of hypervisors, which are the elements that introduce most of the overhead. Finally, unikernels are single-address-space machine images constructed using library OSs, which can run single processes. Unikernels achieve the best performance of the solutions mentioned above.

The properties of NFV can be divided into three main categories: i) attributes, ii) threats, and iii) means. Attributes are defined as availability (i.e., probability of readiness) and partial availability (i.e., availability with respect to a subset of requirements or users), reliability (i.e., probability of service continuity), survivability (i.e., system-level reliability), and maintainability (i.e., the ability to maintain and repair functional units). Threats are grouped into faults (i.e., causes of system errors), errors (i.e., system states that can cause a failure), and failures (i.e., deviations of the service from the expected requirements). Faults are classified as physical (hardware-based faults), transient (temporary faults), intermittent or sporadic (recurrent faults), design or logical (human-made faults introduced during the definition of specifications, design, or implementation), interaction or operational (accidental faults occurring during human interaction with the system), environmental (faults caused by the environment where the system is located), excessive load (faults due to a load greater than the system capacity), and malicious attack (faults caused by external attackers).

When a system fails, it has to recover to its original state. Recovery can be classified as repair (i.e., fixing the failed component) or replacement (i.e., substituting the failed component). The process of repairing proceeds through the stages of i) detection, ii) localization, iii) isolation, and iv) repair/replacement. The transformation of current hardware-based networks into virtual networks based on virtual network functions is expected to lower the failure frequency while significantly increasing the consequences of each failure, an important new challenge to address.
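The four-stage repair process named above can be sketched as a simple pipeline. Names and log format are illustrative only; the point is the fixed stage ordering and the repair-versus-replacement decision at the end.

```python
# Stages from the text: detection -> localization -> isolation -> repair/replace.
REPAIR_STAGES = ["detection", "localization", "isolation"]

def recover(component: str, repairable: bool):
    """Walk a failed component through the repair pipeline.

    The final stage is a repair when the component can be fixed,
    or a replacement when it must be substituted.
    """
    log = [f"{stage}: {component}" for stage in REPAIR_STAGES]
    final = "repair" if repairable else "replacement"
    log.append(f"{final}: {component}")
    return log

# A hypothetical failed VNF that cannot be fixed in place:
print(recover("vnf-firewall", repairable=False))
```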


URL: https://www.sciencedirect.com/science/article/pii/B9780128204887000190

PCB Design

Ian Grout, in Digital Systems Design with FPGAs and CPLDs, 2008

3.3.1 PCB Design

Overview

PCB design begins with an insulating base and adds metal tracks for electrical interconnect and the placement of suitable electronic components to define and create an electronic circuit that performs a required set of functions. The key steps in developing a working PCB are shown in Figure 3.15 and briefly summarized below:


Figure 3.15. Steps to PCB design

Initially, a design specification (document) is written that identifies the required functionality of the PCB. From this, the designer creates the circuit design, which is entered into the PCB design tools.

The design schematic is analyzed through simulation using a suitably defined test stimulus, and the operation of the design is verified. If the design does not meet the required specification, then either the design must be modified, or in extreme cases, the design specification must be changed.

When the design schematic is complete, the PCB layout is created, taking into account layout directives (set by the particular design project) and the manufacturing process design rules.

On successful completion of the layout, it undergoes analysis by (i) resimulating the schematic design to account for the track parasitic components (usually the parasitic capacitance is used), and (ii) using specially designed signal integrity tools to confirm that the circuit design on the PCB will function correctly. If not, the design layout, schematic, or specification will require modification.

When all steps to layout have been completed, the design is ready for submission for manufacture.

PCB Design Tools

A range of design tools are available for designing PCBs, running on the main operating systems (Windows®, Linux, UNIX™). The choice of tool depends on the actual design requirements, but must consider:

Schematic capture capabilities: the ability to create and edit schematic documents representing the circuit diagram

Layout generation capabilities: the ability to create the PCB layout either manually or using automatic place and route tools. Some design tools will link the schematic to the layout so that changes in the schematic are reflected as changes in the layout (and vice versa).

Circuit simulation capabilities: the ability to simulate the design functionality using a suitable simulator such as a simulator based on SPICE.

Supported operating systems: What PC or workstation operating systems are needed for the software tool to operate?

Company support: What support is available from the company if problems are encountered using the design tools?

Licensing requirements and costs: What are the licensing arrangements for the software, and is there an annual maintenance fee?

Ease of use and training requirements: How easy is the design tool to use, and what training and/or documentation is available to the user?

Table 3.5 shows the main PCB design tools currently used.

Table 3.5. Example PCB design tools

Design Tool Name    Company
Allegro®            Cadence™ Design Systems Inc.
Board System®       Mentor Graphics®
Eagle               CadSoft
Easy-PC             Number One Systems
Orcad®              Cadence™ Design Systems Inc.
Protel              Altium™

LVS

Layout versus schematic (LVS) checking is a process by which the electronic circuit created in the final PCB layout is compared to the original schematic circuit diagram. This check is undertaken to ensure that the PCB layout is electrically the same as the original schematic and that errors have not been introduced. LVS can be performed manually, with the designer checking the connections in the layout against the schematic connections, or it can be automated using an LVS software tool.
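The automated comparison can be sketched if we assume both the schematic and the layout have been reduced to netlists, here modeled as sets of (net name, pin) pairs. Real LVS tools are far more involved; this toy example with made-up net names only shows the comparison idea.

```python
def lvs_check(schematic: set, layout: set) -> dict:
    """Compare two netlists and report discrepancies."""
    return {
        "missing_in_layout": schematic - layout,  # connections never routed
        "extra_in_layout": layout - schematic,    # stray/shorted connections
        "match": schematic == layout,
    }

# Hypothetical netlists: (net name, component pin) connection pairs.
schematic_netlist = {("VDD", "U1.8"), ("GND", "U1.4"), ("NET1", "U1.2")}
layout_netlist    = {("VDD", "U1.8"), ("GND", "U1.4"), ("NET1", "U1.3")}

report = lvs_check(schematic_netlist, layout_netlist)
assert not report["match"]
print(report["missing_in_layout"])   # the schematic connection the layout lost
print(report["extra_in_layout"])     # the connection the layout added instead
```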

DRC

Design rules checking (DRC) is a process by which the PCB layout is checked to confirm that it meets manufacturing requirements. Each manufacturing process has a set of design rules that identifies the limitations of the manufacturing process and ensures a high manufacturing yield. Design rules are rarely violated, and only then if clearance is given by the manufacturer and the designer is aware of and accepts any inherent risks.
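A DRC pass of the kind described above can be sketched as a check of track geometry against a rule set, here limited to minimum line width and spacing as in Table 3.6. The rule values and net names are invented for illustration; a real DRC engine checks many more rule classes over actual layout geometry.

```python
# Assumed, illustrative manufacturing limits (see Table 3.6 for the categories).
DESIGN_RULES = {
    "min_external_line_width_mm": 0.15,
    "min_external_line_spacing_mm": 0.15,
}

def drc_check(tracks, gaps, rules=DESIGN_RULES):
    """Return a list of human-readable rule violations (empty if clean)."""
    violations = []
    for name, width in tracks:
        if width < rules["min_external_line_width_mm"]:
            violations.append(f"{name}: width {width} mm below minimum")
    for pair, spacing in gaps:
        if spacing < rules["min_external_line_spacing_mm"]:
            violations.append(f"{pair}: spacing {spacing} mm below minimum")
    return violations

tracks = [("NET1", 0.20), ("NET2", 0.10)]   # (track name, width in mm)
gaps   = [("NET1/NET2", 0.12)]              # (track pair, clearance in mm)
print(drc_check(tracks, gaps))   # NET2 width and NET1/NET2 spacing are flagged
```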

Layout Design Rules and Guidelines

To produce a well-designed and working PCB, both design guidelines (recommended, but not mandatory) and design rules (mandatory, to avoid manufacturing problems) must be respected. For example:

Do not violate the minimum track widths, track spacing, and via sizes set by the PCB manufacturer. Table 3.6 provides a set of minimum dimension constraint examples.

Table 3.6. Layout design considerations

Layout consideration    Meaning
Internal line width     Minimum width of a metal track inside the PCB structure.
Internal line spacing   Minimum distance between two metal tracks inside the PCB structure.
External line width     Minimum width of a metal track on an outside surface (top or bottom) of the PCB.
External line spacing   Minimum distance between two metal tracks on an outside surface of the PCB.
Minimum via size        The minimum size allowable for a via.
Hole to hole            Minimum distance between adjacent holes in the PCB insulating material.
Edge to copper          Minimum distance from the edge of the PCB to the copper that is designed for use.

Avoid exposed metal under component packages. Any metal under a package should be covered with solder mask.

Make the pads for soldering the electronic components to the board as large as possible to aid component soldering.

Avoid placements of components and tracks (and ground and power planes) that require removing a large amount of copper from some parts of the board while leaving large amounts in the rest. Where possible, keep an even spread of tracks and gaps between the tracks across the entire board. (The copper layer starts as a sheet of metal covering the entire surface, and an etching process removes the unwanted copper to pattern the tracks.)

Use ground (and power) planes for the component power supplies. Where possible, dedicate a layer to a particular power level (e.g., 0 V as ground). Use split planes if necessary; these are multiple planes on a layer where a part of the layer is dedicated to a particular power level.

Use power supply decoupling capacitors for each power pin on each component. Place the decoupling capacitor as close as possible to its component pin. For example, data converter data sheets normally provide information for the PCB designer in relation to the decoupling capacitor requirements.

Use decoupling capacitors for each DC reference voltage used in the circuit (e.g., reference voltages for data converters); as with power pins, the device data sheet normally states the decoupling requirements.

Use separate digital and analogue power supply planes and connect at only one point in the circuit. For example, a data converter package normally has separate power (VDD and GND) pins for the analogue and digital circuitry. The device analogue and digital power will be provided by connecting the IC to separate power planes. The GND connection is connected at one point only underneath the IC (see Figure 3.16). Data converter datasheets normally provide information for the PCB designer relating to the placement of signal and power connections.


Figure 3.16. Example data converter GND (“common”) connection (top down)

Minimize the number of vias required.

Avoid ground loops, which form when the ground connections on the electronic components are routed to the common track (or plane) in such a way that loops of metal are created. Ground loops can cause noise problems in analogue signals.

For the particular PCB, consider which is more important: the placement of the components or the routing of the tracks. Adopt a layout design procedure that reflects this.

Separate the digital and analogue components and tracks to avoid or reduce the effects of cross-talk between the analogue signals and digital signals.

Ground Planes

Ground (GND) and power planes on the PCB are large areas of metal that are connected to either a power supply potential (e.g., VDD) or the common (0 V) connection (commonly referred to as ground). They appear as low-impedance paths for signals and are used to reduce noise in the circuit, particularly for the common signal. In a multilayer PCB, one or more of the layers can be dedicated to a plane. Any given metal layer can have a single plane or multiple planes (split plane), shown in Figure 3.17. Signals will pass through the plane where the metal is etched away at specific points only, signified by the white dots in the illustration.


Figure 3.17. Single (left) and split (right) planes

PCBs for Different Applications

Certain PCB manufacturers will provide a range of different PCB fabrication facilities to support different applications including:

High-frequency circuits: Specific materials will be required for the insulating base and the track metal for the circuit to operate at the required frequencies [10, 11].

Power supplies: Power supplies may be required to operate at high voltages and high currents to meet performance requirements.

Controlled impedance: This is required in applications in which the interconnecting track acts as a transmission line and must have a known and controlled impedance. Such applications include radio frequency (RF) circuits and high-speed digital switching circuits.


URL: https://www.sciencedirect.com/science/article/pii/B9780750683975000039

How to run 2 operating systems on one laptop?

Dual (or multiple) boot: In this case, we divide the computer's hard drive into multiple "partitions," then install different operating systems in each partition. With a dual-boot setup, the computer must be rebooted to switch from one OS to another.

How to run multiple operating systems on one computer?

Virtualization software — programs that allow you to run multiple operating systems simultaneously on a single physical computer — makes this possible without the reboot that dual booting requires.

Which operating systems support the FAT32 file system choose two?

Support for the FAT32 file system became available in Windows 95 OSR2 and Windows 98, and later in Windows 2000. (Windows NT 4.0 does not support FAT32; it also does not support and cannot access HPFS partitions.)

What are two features of the active partition of a hard drive choose two?

1. The active partition must be a primary partition. 2. The operating system uses the active partition to boot the system.