Putting up a company website entails a lot of risks. Despite the numerous problems that arise, service desk software provides proper management of tasks as well as solutions. This kind of software is installed for the benefit of both the customers and the professionals who work on the website. Most of the time, problems are repetitive, and solving the same issue over and over again is a hassle. With service desk software, automated answers are provided to customers as well as to staff members. This saves real time and effort, considering how difficult it can be to manage problems that arise unexpectedly.

Moreover, the software makes it easier to identify the root cause of any problem. This is very helpful because once a problem is detected, solutions can be provided right away. Before that happens, the software categorizes all the concerns and prioritizes them accordingly, which matters because even the smallest problem gets identified and solved. The software is a great tool for organizing tasks properly and with minimal errors. It is a good investment for company websites because it has a lot of beneficial features; just select the best service desk software in order to take advantage of them.

IT Asset Management Software: An Impeccable Solution For Business

Asset management is a systematic approach to operating, maintaining, upgrading, and disposing of assets cost-effectively. Alternative views of asset management in the engineering environment are the practice of managing assets to achieve the greatest return (particularly useful for productive assets such as plant and equipment), and the process of tracking and maintaining facilities and systems, with the goal of providing the best possible service to users (appropriate for public infrastructure assets). IT asset management practices share a common set of objectives: to uncover savings through process change and support for strategic decision making, gain control of the inventory, increase accountability to ensure compliance, improve the performance of assets and their life cycle management, and enhance the availability of the business, its applications, and its processes.

IT asset management software supports the business practice of managing and optimizing software. Its best practices help you stay informed about your software so you always know which asset is running on which machine. Furthermore, IT asset management software helps ensure that you have the right number of licenses, so you neither waste money on unnecessary software nor run the risk of license non-compliance. It helps control the cost of IT assets with a single solution that tracks and manages your hardware, software, and related information across their entire life cycle. It lets you optimize IT asset use and IT service levels, delivering not more and not less than is needed, and it closely aligns IT with business requirements through data on IT asset demand and use.

Let’s face it: hard drive failure can occur at any time. There is often no warning at all, and in many cases it happens when you least expect it. It is important to equip yourself with some information about backup planning and the need for a reliable backup plan. Failing to back up your data may cost you dearly in terms of losing important files that were stored only on that hard drive.

Secondly, hard drive failure must be detected early enough. If you notice that your hard drive is producing loud clicking sounds when it is powered on, do not just ignore it; doing nothing may cost you dearly, and further damage is likely if you do not take care of the problem as quickly as possible. You also need to know that proper handling of your disk drives is very important. Roughly 20% of hard drive failures are caused by mishandling of the drives. This means you should ensure that they are stored at the right temperatures, protected from water and heat, and never subjected to physical damage.

How To Deal With A Hard Drive Crash

Nobody is too special to escape hard drive failure; it happens to everybody’s machine, expert or not. However, you can take some safety measures to avoid hard drive failures or crashes. First, ensure that all your data is backed up somewhere other than the hard drive itself. You may decide to keep backups in several different places. You can opt for memory cards, which do not need to be installed inside the computer, or store the data on other devices such as external disks or optical discs, just for safety purposes.
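If you want to automate that habit, here is a minimal Python sketch that copies one folder to several backup destinations in a single pass; the source and destination paths are invented placeholders, so swap in your own drives or cards.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Hypothetical source and destinations; replace with your own paths.
SOURCE = Path.home() / "Documents"
DESTINATIONS = [Path("/media/usb_stick/backups"), Path("/media/sd_card/backups")]

def backup_everywhere(source, destinations):
    """Copy the source folder into each destination, stamped with the date."""
    stamp = datetime.now().strftime("%Y%m%d")
    for dest in destinations:
        target = dest / f"{source.name}_{stamp}"
        try:
            shutil.copytree(source, target, dirs_exist_ok=True)
            print(f"Backed up {source} -> {target}")
        except OSError as err:
            # A missing or unplugged drive should not abort the other copies.
            print(f"Skipping {dest}: {err}")

if __name__ == "__main__":
    backup_everywhere(SOURCE, DESTINATIONS)
```

Run something like this on a schedule and a single failed drive is an inconvenience rather than a disaster.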

Hard drive failure can also be kept at bay by storing your hard drives properly. Avoid subjecting them to extreme temperatures, whether hot or cold, since this can badly damage sectors and blocks; normal room temperature suits hard drives best. You should also develop a habit of checking the health status of all your drives. Identify good software that will take you through the process and give you a report on each drive’s condition.

Finally, hard drive failure can be dealt with by gathering the necessary information about it. Do not just sit and wait for the experts to come and resolve your issues. You can handle the less complex hard drive failures yourself and save some money in the process.

When RAID, or Redundant Array of Independent Disks, first hit the market, a lot of people went frantic about it. The idea of having their bountiful files saved on just one device made millions of computer users extremely excited. Because RAID is made up of several disks, it is capable of storing huge amounts of data. Unlike a single hard disk, whose storage capacity is very limited, the space provided by RAID is enough to house an average person’s files. However, this system can also go on the blink. The controller that makes all of its disks work can break down. In an event like this, it is recommended to call a technician who is able to recover RAID arrays. Technicians who are certified and efficient at repairing RAID arrays are employed by established companies. Research highly regarded firms that offer RAID repairs on the internet; it can present you with hundreds of companies with this specialization. You can compare prices and choose the one that offers the service at the most reasonable price. Reviews and customer testimonials are great references for locating a reputable RAID repair agency.

You cannot keep your files forever unless you make an effort to really take good care of them. Many people now use RAID, or Redundant Array of Independent Disks, because it can hold huge amounts of data. The system is made up of several hard disks run by a controller, and its storage capacity is hard to beat. However, this same attribute can lead to serious problems: because of the volume of files it carries, RAID is often associated with data loss or deletion.

Unfortunately, you need to hire a professional in RAID array recovery in case the disks stop running. This service is noticeably more expensive than repairing an ordinary single disk because of the complexity of the task. But if you really want your disks back in working condition, you will not mind spending a few dollars on it. Not all RAID technicians can guarantee recovery of lost data, but the likelihood of getting those files back is much higher if you are prudent in choosing who to seek help from. Always go for a trusted technician, and make sure he is part of an established company that has proper testing facilities.

What You Need to Know About Recovering RAID Array Drives

Controller breakdown is one of the most common failures you can experience with a Redundant Array of Independent Disks. Although RAID has proven to be a very helpful system that enables computer users to save sizeable files with ease, it can also get in the way of keeping your data protected. The disks that make up the system can be abused, and when that happens the entire system fails to do its job; losing your data is the worst part of this scenario. Recovering RAID servers is a service most computer repair companies have in store for people who run into this circumstance, but it is not a service all technicians can perform; one has to undergo intense training to be able to fix RAID errors. When hiring a technician to fix your RAID drive issue, make sure you are not blinded by low-cost quotations. It is understandable that you want to be practical, but in a case like this, where your valuable files are at stake, you should be willing to pay the right amount. If you want to know how much this particular service can cost, use the internet to research the price range at which RAID repair is commonly offered.

RAID arrays really made my work a lot easier, and I can see why they are beneficial. I did not have to save my files on multiple storage devices because the computer alone could store all of them. I never minded not having a backup copy, since RAID worked perfectly for me and I did not think I would ever have a problem with it. But just this morning, a friend of mine asked me if I knew of a company that offers RAID array recovery. She told me that her computer surprisingly stopped working before she had saved her files. I believe those files are really important for her work, and her boss is actually waiting for them. I really feel sorry for her, but her experience has taught me something important: to keep my files protected at all times by securing a backup copy. Even though RAID is a very helpful system, it can still fail. I hope she finds a reliable firm to help her recover her files immediately. She told me she is broke at the moment, and this unforeseen circumstance only made things even more difficult for her. I really wish she could have it fixed at a reasonable price.

Laptop HD recovery was once considered very tough and almost impossible. This is not the case anymore, because there are plenty of professionals working in this field who can recover data from corrupt drives quite easily. How long HD recovery takes depends on the kind of failure your hard drive has; a software failure can sometimes be resolved within an hour. You also have to choose the recovery service carefully, because there are plenty of rookie technicians working in this field. Always trust a thorough professional with real experience, and look for successful cases he has handled. Very specific tools are used in this field, and most of the time technicians build these tools themselves; there are no ready-made tools for HD recovery, and recovering from a hardware failure takes a great deal of experience and very thorough knowledge of the drive’s internal circuitry. They may charge you extra for hardware failures, and if you have critical data that you must get back, you should never hesitate to pay the price. Just make sure that you are dealing with a legitimate and qualified person.

Don’t freak out if your drive fails. Get help!

Reuse Your Hard Drive After HD Recovery

A hard disk can get corrupted for many reasons, and a computer becomes useless without one. It is always better to back up your critical data, because even modern hard disks are far from perfectly reliable. Your hard drive can face two kinds of issues, and the first is software problems. Software problems are more frequent, and easy solutions are available for them: professional HD recovery firms can recover data from all kinds of software issues, they have tailor-made tools for each kind of problem, and they guarantee HD recovery. Hardware failure can also occur, and when it does you need a different set of tools to diagnose and repair the drive. Modern HD recovery firms not only give you your lost data back, they can also return your hard drive in working condition, and you can reuse that drive for a long time. Just make sure that the cost of HD recovery is not excessive and that the turnaround time is not too long. Keep these things in mind and make sure you start HD recovery in good time.

Instant Methods Of Recovery

Hard disks can get corrupted for many different reasons, and the nature of the failure differs from scenario to scenario. If your computer is still working but a few partitions of your hard drive are not responding, the problem is usually easy to resolve. There are many free HD recovery programs you can use; most of them can solve minor problems and get your lost data back easily. If these programs do not work on your hard drive, you can turn to professional help. Many recovery firms offer their services at very affordable prices, and when the problem is not very complex they will charge very little. These firms give you guaranteed solutions, and you can generally trust them. If your hard disk has been corrupted by a hardware failure, however, the problem can be very complex. You will need to spend a lot of money to resolve hardware issues, because the tools for hardware-level HD recovery are very specialized and expensive, and the work is lengthy and time-consuming, sometimes taking weeks to complete.

When most people experience a hard drive crash, they become worried and take desperate measures to get the drive working again. That does not always work, and you can end up damaging the hard drive further and losing all the data that was on it. It is essential to note that a hard drive crash is a real possibility for any computer, and there is not much you can do beyond delaying it. There are, however, a number of ways to tackle this problem, which can knock on your door at any time.

SSD drives can fail also!

You can get software that is available on the market to help you out with your problem. Some of these products are free, while others must be purchased for the same purpose. There are also many service providers that recover data after hard drive crashes, and all you have to do is choose a reputable one to see you through the problem. People facing the problem are often torn between using the free version or the paid version of the data recovery software that will handle their hard drive crash.

Identifying Hard Drive Crashes

As a computer user, you will at one point or another experience a hard drive crash. The reasons for such a crash vary, from virus attacks to system malfunctions. When such things happen, we usually get worried and panic at the thought of losing all the important files that were saved. If you have no deep knowledge of computers and how they work, the first thought will be to call a computer repair shop. That is a good move on your part, since you eliminate any risk of further damage to the system.

For the experienced user, it is good to find out what caused the hard drive crash. If the crash was the result of physical damage, then an expert is your best bet for repairing it. Physical damage makes the components very difficult to work with, so only an experienced professional should handle it. It is also advised that you not move the hard disk, since doing so could cause some parts to dislodge or break even further.

If the hard disk crash was a result of logical problems, for example an operating system malfunction, then you can do the data recovery process yourself. With the help of downloadable software, you can access the hard disk, recover the data, and even repair the bad partitions.

We all experience bad situations from time to time. They are meant to test our resolve and ability to handle tough situations. In many cases, trials make us stronger, and teach us to never repeat the mistakes that led to the situation in the first place. In this particular case, I am talking about a hard drive crash. If you have ever experienced one, then you know it does not take a lot of time for the situation to go from bad to worse.

In many hard drive crash situations, the data can be recovered at first. However, as is human nature, we try to go about the recovery process without the proper information and tools, and end up making things even worse. We often try to avoid the high costs associated with data recovery and choose the easy way out, which is usually a quick fix. In almost all cases this does not work, and it ends up being more expensive than it would have been if a professional had been contacted in the first place.

In any hard drive crash situation, it is important to leave the hardware as it is, since moving it can complicate things further. If you cannot troubleshoot the problem, call in the experts. They will know what to do, and will have the data recovered in a short time.

You can rarely tell in advance when your hard drive is going to fail. It is therefore important to have measures in place that make the problem easier to handle, in terms of preventing or detecting the impending failure. To guard against these crashes, the first thing a computer user needs to do is learn which actions can lead to a crash. This means knowing, for example, the proper ways of installing and uninstalling programs and files.

The computer also needs virus protection. There are a number of antivirus programs on the market today that can be used to protect the system from corruption, which will, by extension, also safeguard the hard drive. This might even mean strictly controlling access to the computer.

To have a better shot at guarding against a hard drive crash, there are programs you can use to have the computer monitor itself. In essence, these programs report on the working condition of the hard drive. With this information, you can take the necessary steps to move your files and documents to safer storage locations, which can save you a lot of time and even money.
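As one concrete way to do that monitoring, the short Python sketch below shells out to smartctl from the widely used smartmontools package and reports the drive’s own health verdict. The device path is an assumption for illustration, and on most systems the command needs administrator privileges.

```python
import subprocess

def drive_health(device="/dev/sda"):
    """Return the SMART overall-health line reported by smartctl, if available."""
    # smartctl -H prints the drive's overall self-assessment (e.g. PASSED/FAILED).
    result = subprocess.run(
        ["smartctl", "-H", device],
        capture_output=True, text=True, check=False,
    )
    for line in result.stdout.splitlines():
        if "overall-health" in line or "SMART Health Status" in line:
            return line.strip()
    return "No SMART health line found; the drive may not support SMART."

if __name__ == "__main__":
    print(drive_health("/dev/sda"))  # adjust the device path for your system
```

A scheduled check like this is exactly the kind of early warning that gives you time to move files before a drive gives out.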

There are several reasons a hard disk may fail, ranging from electronic to mechanical factors. Power surges are among the most common causes of hard disk problems, which is why it is advisable to use an automatic voltage regulator or UPS. Beyond that, a hard drive crash can have mechanical causes. Your computer and its hard drive are mechanical devices, and no matter how well you take care of them, they are prone to wearing out; statistics suggest that about 60% of hard disk failures are caused by mechanical issues. A hard drive crash may happen when the computer is jostled or bumped while running, when bearings go bad, when extreme heat makes the circuit board fail, or when there is a sudden power failure while the disk is working.

You can prevent a hard drive crash from turning into a serious hard disk problem, though. Just watch for signs of trouble. Clicking or grinding noises while the computer is running are one sign of impending hard disk failure. If your files keep disappearing mysteriously, you might have a hard disk-related problem. If the computer freezes and forces you to do a hard reset, and does so often, you might want to check the hard disk. Noticed that your computer is unusually hot? That is another sign of a hard drive failure. When you do experience a hard drive crash, be sure to contact an expert such as Hard Disk Recovery Services of Irvine, CA.

Great Software For Hard Drive Recovery

With the advancement of software and computer technology, a hard drive crash is not the nightmare it may have been a few years back. That is, unless the damage is physical, in which case you have no option but to call an expert to help you out. But physical damage usually takes time to develop, and if you take care of the hard disk and secure it safely, physical damage will be something you only read about in magazines and on the web. Here, I am talking about logical damage, such as that caused by unreadable boot files and bad partitions.

When the hard drive crash is as a result of logical problems, it can be fixed with the help of software tools. It is good to have a trial run with such software before deciding on which one to use. To help in this, software companies offer trial versions of their products, so users can get to test them. In such a case, having the opinion of others who have used the software before helps a lot. They will tell you which is the best software to use, and why.

Software tools for recovering data after a hard drive crash are mostly free and easy to use, so take advantage of them today.

Password age is an important setting that many managers tend to ignore, especially in smaller networks. That’s a mistake, because invoking this option forces users to change their passwords regularly.

But keep your accounts under close surveillance for unusual usage statistics. For that matter, a regular inventory of all your user accounts is a good idea, especially in larger networks where more than one administrator has the right to add or remove users. Delete dead accounts and passwords.

2. Organize and document user permissions in functional groups.

Many network managers often neglect the importance of each user name. This is an extremely important facet not only of network security but of network organization in general. Be careful during the process of assigning user names and be sure to document the process.

User names are as central to the NT security model as passwords are. Proper user names and passwords work in conjunction during the logon process to ensure legitimate access and grant validation. We’ve heard of network managers allowing users to pick their own user names, but that practice certainly isn’t recommended. It’s best to take care of assigning user names yourself.

First, the user name needs to be unique within its own NT domain and should be unique to the network in general. And just like with passwords, don’t be afraid of longer names–you’ve got 20 characters to work with in NT. Once you’ve assigned user names and passwords for full network access, you can begin to implement NT security by placing users into groups, each of which will have its own set of permissions. This makes organization easier and has the side benefit of allowing you to set user permissions en masse. You can then increase group resources fairly easily by defining appropriate “trust” relationships.
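To make the group idea concrete, here is a small, purely illustrative Python sketch of how rights attached to groups can be resolved into a user’s effective permissions; the group names and permission strings are invented for the example and are not part of any NT interface.

```python
# Illustrative only: resolving a user's effective rights from group membership.
GROUP_PERMISSIONS = {
    "Accounting": {"read:ledger", "write:ledger"},
    "Engineering": {"read:source", "write:source", "read:ledger"},
    "Everyone": {"read:bulletin"},
}

USER_GROUPS = {
    "jsmith": ["Accounting", "Everyone"],
    "akhan": ["Engineering", "Everyone"],
}

def effective_permissions(user):
    """Union of the permissions granted by every group the user belongs to."""
    rights = set()
    for group in USER_GROUPS.get(user, []):
        rights |= GROUP_PERMISSIONS.get(group, set())
    return rights

print(sorted(effective_permissions("jsmith")))
# ['read:bulletin', 'read:ledger', 'write:ledger']
```

Changing what a whole department can touch then means editing one group entry rather than dozens of individual accounts, which is the point of setting permissions en masse.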

3. Change the name of the Admin user account.

Anyone who has installed NT Server knows that the system default installs a powerful “superuser” account called the Administrator account. You can’t delete this account, and its password by default is blank. This is a favorite hacker trick, and one that is fairly easy to exploit. You’d think this trick could be blocked automatically by a trained administrator, but that’s just the problem with NT. Because it’s so easy to use, lots of first-time network builders leave even obvious doors like this one open.

You can get sneaky and change not only the password, but the account name as well. Make it difficult for a hacker trying to figure out which account is the Admin account.

Another useful trick takes this one step further. After you change the name of the original Admin account, create a new account and call it Admin. Give it a good password and absolutely no permissions for network resources.

4. Set lockout limits and activity logging.

Users typically hate lockout limits, but they’re invaluable for keeping your network secure. Lockout limits deactivate network accounts after a certain number of failed attempts to log in, due either to a faulty user name or a faulty password. The usual number of tries before lockout is three, but this option must be activated by you to take effect. It also means that someone with rights to activate user accounts must be present during business hours in case a legitimate user manages to lock himself or herself out. You’ll need someone with those rights to reactivate the account; otherwise, it’s useless for at least 24 hours. Even this option is risky. Unless you’re watching for hack attempts, an outside predator could attempt access three times a day without anyone being the wiser. It’s best to allow only an administrator to reactivate locked-out accounts.
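The rule itself is simple enough to sketch. The following Python fragment is only a conceptual model of the three-strikes lockout, not an NT API; it just shows how failed attempts accumulate and why only an administrator should clear the lock.

```python
# Conceptual model of account lockout after repeated failed logons.
MAX_ATTEMPTS = 3

failed_attempts = {}
locked_accounts = set()

def record_logon(user, success):
    if user in locked_accounts:
        return "locked"            # only an administrator should clear this
    if success:
        failed_attempts[user] = 0  # a good logon resets the counter
        return "ok"
    failed_attempts[user] = failed_attempts.get(user, 0) + 1
    if failed_attempts[user] >= MAX_ATTEMPTS:
        locked_accounts.add(user)
        return "locked"
    return "failed"

for attempt in (False, False, False, True):
    print(record_logon("jsmith", attempt))
# failed, failed, locked, locked -- even the later valid password is refused
```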

To keep track of account lockouts and other possible signs of hacker activity, you need to activate account logging. Just head for the Policy menu in the User Manager for Domains (standard Admin Tools). You need to check the Event Log regularly to find them. The system won’t alert you.

5. Make good use of NTFS.

For server machines in an NT environment, you’d be nuts to use anything other than NTFS. You’ve got the option to use FAT, but you can’t protect files on FAT volumes. NTFS has built-in security options to protect your files. NTFS supports file and directory security, and this security policy can efficiently address very large disk volumes. If that’s not enough, NTFS is Unicode-compliant and has built-in features that will protect your data in case of system corruption or failure.

On a new disk, an NTFS volume extends full control to file permissions to the group Everyone. Any subsequent directories or file additions inherit these permissions, so you’d be well advised to change these defaults after formatting any new NTFS drive. You can do this by clicking on the Other option in the Security tab.

6. Lock it up.

Physical security is another overlooked method of keeping your network secure. Leaving servers, hubs, switches, and remote-access modems out in the open is common practice at many small and medium-size network sites, due mostly to space constraints. It may cost a bit to keep these components under lock and key, but if you’re paying attention to network security, the benefit will be huge.

Let’s face it. Most hackers don’t gain access by dialing in and zapping test passwords at your system. These guys do research, and that often means snooping around your office. And what about disgruntled employees, malevolent building-maintenance workers, or even security people? The list of folks who have access to computers you leave out in the open is long and often undistinguished. Cutting down on physical access is mandatory for a truly secure network.

7. Install Service Pack 3.

Windows NT is presently on Service Pack 3, with number 4 in beta (We recommend avoiding that one until it ships.) Service Pack 3 should be installed on all your servers and consistently reinstalled every time you add new software or hardware. The security benefits provided by SP3 are huge, including Server Message Block (SMB) Signing, password filtering, and the ability to restrict anonymous users. It also lets you use a system key to encrypt your password data.

SMB (also known as the Common Internet File System) is an authentication protocol that gives access to mutual authentication, which means both the client and the server must agree on who they are before a connection is established. Similarly, password filtering lets you force restrictions on user password choices. Even more critical is anonymous-account management. Windows NT uses anonymous accounts for things like RAS communications, which certainly implies a high security risk. With SP3, you get the ability to restrict access to these accounts. The SP3 system-key feature is great. Once this is in place, you won’t be able to boot the NT server at all without the key. Even better, these keys use strong encryption to protect all the password data stored in the Security Accounts Manager (SAM) database.

8. Stay on top of specific Microsoft security-hole fixes.

Although service packs are critical to server health and security, they aren’t all that timely. New versions are released periodically, but hackers work on a much speedier and less predictable timetable. Fortunately, the folks in Redmond keep a sharp eye out for new hacker attack methods as they are discovered and reported. Solutions and hot fixes to these problems are posted on Microsoft’s Web site and are also published monthly in the TechNet CD-based support series (a service we highly recommend you check out if you work with Microsoft operating systems on a regular basis).

9. Secure the NT Registry.

Windows NT’s Registry is by default more secure than that of Windows 95/98. You can assign key security to the Registry as you would to a disk volume. This blocks general outside access, and you can even assign administrative rights to users or groups that do have Registry access.

But even with appropriately set Registry access settings, there are other problems. For example, users recently discovered a major security hole within the NT Registry. This problem revolves around security keys that assign specific programs or services to run automatically after the server boots. Under a default Windows NT Server installation, all users have access to these keys, making it relatively simple for hackers to set one to run one of their own programs every time a server boots. You can find a fix for this problem by contacting Microsoft, checking your TechNet CD (posted May 1998), or looking it up in the Microsoft Knowledge Base at www.support.microsoft.com.

10. One for later: Perform a security audit.

Remember that security is not a fixed asset; staying current and avoiding future problems requires a regular security audit.

Normally, this is a time-consuming, specialized, and difficult task–even in Windows NT. Fortunately, there are excellent third-party applications available to help make this process easier and even automate it to some degree. Two great examples of such software are Security Dynamics Technologies’ Kane Security Analyst (KSA) for Windows NT and RealSecure 2.0 from Internet Security Systems (ISS). While RealSecure also offers some of this auditing functionality, its main role is protecting you from hacker attacks by means of its database of more than 100 NT defense mechanisms.

KSA will check your NTFS volumes for proper permission controls, data integrity, and overall security. If you want to make use of automated auditing, it’s simply a matter of defining what areas you want KSA to check on a periodic basis. Once that’s done, the system will provide you with a series of regular reports on the security status of your network that cover six general categories: password strength, access control, user-account restrictions, system monitoring, data integrity, and data confidentiality. Remember that KSA is limited as to what kinds of suspicious activity it will alert you to. That means you’ll have to take the time to read these reports and make decisions based on their information.

RTOS vendors have moved quickly to gain a piece of any of these potentially lucrative markets for computer-like consumer devices. Integrated Systems Inc. (Sunnyvale, Calif.) is already shipping its pSOS operating system in car-navigation systems, digital cameras, phones and Sony’s satellite set-top box. Wind River Systems Inc. (Alameda, Calif.) is shipping its VxWorks in satellite set-tops and digital cameras, and is working with Nortel on Internet-connected phones. The RTOS from QNX Software Systems Ltd. (Kanata, Ont.), which senior technical analyst Mal Raddalgoda compares with Windows CE in his article later in this section, is finding use in set-tops, DVD players, VCRs and telephones.

VRTX from the Microtec division of Mentor Graphics Corp. (Wilsonville, Ore.) is targeted at many of the same applications. Set-top-box maker General Instrument (Horsham, Pa.) is using it in its new DCT-1000. Jack Birnbaum, manager of digital technical software development at General Instrument, discusses the multimedia system later in this section. Scientific Atlanta (Norcross, Ga.) uses PowerTV’s PowerOS in its digital set-top, and the OS is also being implemented in TVs, network computers and DVD devices.

Microsoft Corp. has seen the main market for its operating systems-the desktop PC-flatten in terms of market penetration. While its Windows NT has made significant inroads into servers, the volumes in neither market match those of the emerging consumer-computing arena. So, it comes as no surprise that the Redmond, Wash., company is looking for new worlds to conquer.

After its first foray into computing for consumer electronics, Windows CE 1.0, met with yawns from the embedded community, Microsoft rolled a second spin. A third spin, Windows CE 3.0, is due out in the third quarter of next year.

WinCE 2.0 comes closer than 1.0 to its RTOS competitors in terms of memory space, interrupt response and determinism. And although it is still a far cry from the capabilities of many RTOSes, CE 2.0 packs more than enough of a punch for first-generation consumer computer applications such as handheld and palm-sized PCs, where companies such as Casio, Everex, Hewlett-Packard, LG Electronics, Philips and Sharp have already pressed it into service. CE is even sturdy enough for WebTV, which sits alongside the TV set accessing the Internet at 28.8 to 56 kbits/second over existing analog phone lines.

The big question remains: Does CE have sufficient interrupt response and determinism for use in tomorrow’s all-digital TVs and set-top boxes, which will be linked to the Internet by all-digital ADSL phone connections and cable modems with data rates ranging from 2 to 30 Mbits/s? A number of embedded designers who have looked at the OS in terms of its applicability to hard real-time applications suggest it does not.

One reservation voiced by several designers interviewed about WinCE 2.0 concerned threads and thread priority, and the way the OS manages them. For one thing, while CE is preemptive and multitasking, it is limited to supporting only 32 processes simultaneously. That’s enough for small systems, but this limitation can be a problem in larger systems like those found in the telecom/datacom or office-automation markets.

With the current implementation, Microsoft’s documentation says there are eight priority levels available. But only seven of them are really useful, designers said. The lower levels are idle and barely usable by applications and threads, since they can execute only after all other priorities are complete. This makes CE a solution for soft real-time applications only, and then only if an engineer is very careful in the way he or she writes the code or implements the hardware.

By way of contrast, most RTOSes have as many as 256 levels, all of which can be used in any hard or soft real-time situation. In addition, having only eight priority levels requires that multiple threads share a single priority level, managed by a round-robin scheduler. This scheduling technique is hard on system performance and makes the system determinism extremely difficult to evaluate.
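To see why a handful of shared priority levels pushes behavior toward time-slicing, consider this simplified Python model of a scheduler that always serves the highest non-empty level and round-robins among the threads parked there. It is an illustration of the general policy, not of any particular kernel, and the thread names and level count are made up.

```python
from collections import deque

# Simplified model: a few priority levels, round-robin within each level.
NUM_LEVELS = 8
ready = [deque() for _ in range(NUM_LEVELS)]  # index 0 = highest priority

def add_thread(name, level):
    ready[level].append(name)

def next_thread():
    """Serve the highest-priority non-empty level, rotating threads within it."""
    for level in range(NUM_LEVELS):
        if ready[level]:
            thread = ready[level].popleft()
            ready[level].append(thread)  # time-slice: back to the end of its level
            return thread
    return None

# With only a few levels, unrelated threads end up sharing one of them...
for name in ("video_decode", "net_stack", "ui_redraw"):
    add_thread(name, 3)

print([next_thread() for _ in range(6)])
# ['video_decode', 'net_stack', 'ui_redraw', 'video_decode', 'net_stack', 'ui_redraw']
```

With 256 levels, each of these threads could sit at its own priority and the round-robin rotation would rarely come into play; with eight, the rotation dominates.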

Time-slice system

Also, according to Microsoft specs, levels 2, 3 and 4 are dedicated to kernel threads and normal applications. But what the documentation does not say is that applications cannot be prioritized against each other or against kernel threads. Since there are only eight priority levels, the number of threads per level will usually be bigger than the number of priorities.

This situation will make the time-slice scheduler predominant over the priority-based scheduler. Consequently, the overall system will act more like a time-slice-based scheduler rather than a priority-based system, limiting its effectiveness even in soft real-time applications.

By comparison, with an exclusively priority-based scheduler the only requirement is the need to establish priority between threads. Once established the scheduler will use these priorities to run the threads. Since thread completion (by blocking or dying) is requested before switching to another one, no additional context switches are needed.

Engineers targeting real-time Net-centric applications in the near future should also be concerned about the way WinCE 2.0 handles interrupts. The OS lacks nested interrupts, which are almost de rigueur in embedded systems. The upshot: an interrupt cannot be serviced while a previous interrupt is being processed. So, when an interrupt is raised in WinCE 2.0 while the kernel is within an interrupt service routine (ISR), execution continues to the end of that ISR before starting the ISR for a new IRQ (interrupt request line). This can introduce latencies, or delays, between the time of the hardware interrupt and the start of the ISR.

The bottom line, according to several developers, is that no matter what improvements Microsoft makes to the underlying kernel, WinCE 2.0 generally does not handle interrupts efficiently and the model is not well-suited for real-time. For one thing, interrupt prioritization is not respected. Interrupts are taken and processed by the kernel in the order of arrival. Once the kernel accepts an interrupt, this interrupt cannot be preempted by one arriving later.

The only time an interrupt can preempt another is when one interrupt service thread (IST) has a higher priority. But since WinCE 2.0 provides only two levels for IST priority, high and low, this feature is something of a blunt instrument. For systems with more than two devices generating interrupts, ISTs will have to share priority levels, and priority between devices cannot be guaranteed.

A second problem is context switches that are needed to process an interrupt. Once an interrupt is generated, the kernel performs at least three context switches: one when the processor receives the interrupt, a second when the IST is started and a third when the thread processing the data is activated. For a system with a low interrupt rate, this kernel will work perfectly. But once the rate rises, the system will completely fall down.
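A toy timeline makes the cost of non-nested handling easy to see. In the Python sketch below, the arrival times and ISR durations are invented purely for illustration; the point is that a request arriving during a long ISR cannot start until that ISR finishes.

```python
# Toy simulation: without nesting, a later (even higher-priority) IRQ must wait
# for the ISR already in progress to finish before its own ISR can start.
def service_without_nesting(events):
    """events: list of (arrival_time, name, isr_duration); returns ISR start times."""
    busy_until = 0
    starts = {}
    for arrival, name, duration in sorted(events):
        start = max(arrival, busy_until)   # cannot preempt the running ISR
        starts[name] = start
        busy_until = start + duration
    return starts

events = [
    (0, "timer_irq", 40),     # a long ISR already running
    (5, "network_irq", 10),   # arrives 5 ticks later, but must wait until tick 40
]
print(service_without_nesting(events))
# {'timer_irq': 0, 'network_irq': 40}  -> 35 ticks of added latency
```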

Intent on being a player in the real-time world, Microsoft has vowed to improve Windows CE 3.0, adding faster thread switching, nested interrupts and more priority levels. The 3.0 kernel will require no more than 50 microseconds when switching between high-priority threads, Microsoft said. The ability to nest interrupts will make it possible to respond to higher-priority interrupts while servicing lower-priority ones. The new kernel will also support more than WinCE 2.0’s eight priority levels.

This past summer, Microsoft released Windows 98 to a far more muted fanfare. Instead of prompting people to “Start it up,” the update to Windows 95 modestly promised tighter integration with the Web, a few performance tweaks, a slew of bug fixes, and an almost imperceptibly improved design. Because Win 98 looks a lot like its predecessor, and because many of its enhancements are actually found in Microsoft’s separately available Internet Explorer (IE) 4.01 Web browser, moving from Win 95 to 98 isn’t nearly the profound experience the previous upgrade was.

Of course, that makes the decision of whether to upgrade to Win 98 even tougher. To help you decide, we’ve gone under the hood to uncover the differences between Windows 95 and its newer, spiffier sibling. We point out what’s been improved, what remains the same, and what’s still sorely lacking. Then we discuss whether you should pay the $90 and move to Windows 98 right away, or wait until you buy your next computer and get Win 98 preinstalled.

A New Coat of Paint

At first glance, the new version of Windows resembles the old. In fact, you have to look carefully to notice any cosmetic differences, although you’ll notice that Windows 98 has added a few more icons to the default desktop. Along with the My Computer, Recycle Bin, and shameless Setup the Microsoft Network icons, we found other icons that can help you with your work. For instance, someone at Microsoft wised up and placed the My Documents folder, where we like to keep our Word and Excel files, right on the desktop and in the Documents menu in the Start pop-up menu. If you use Microsoft Office 97 under Windows 95, the office suite creates the My Documents folder on your hard disk, but you have to launch the Windows Explorer file manager to find it. The new operating system puts the default file-save stash in plain view for quick and easy access. Pretty smart.

However, for some reason, Microsoft still keeps Windows Explorer–as opposed to Internet Explorer–off the desktop. For three years, each time we reviewed a new Win 95 PC, we had to drag and drop the Windows Explorer icon onto our desktop so we could launch the file manager whenever we wanted. You must do the same thing with Windows 98. Although you can now technically use IE as a file manager, we prefer to use Windows Explorer to navigate through the files on our hard disk.

The Taskbar in Windows 98 has been slightly tweaked. The Start button still resides in the left-hand corner, and in the right-hand corner you can still find the System Tools tray holding a digital clock plus icons for the programs automatically launched when you start up your PC. But next to the Start button, where Windows 95 displayed icons of your launched applications, you’ll now see what’s called a Quick-Launch bar containing four small icons–three of them, as it happens, for the Microsoft Internet tools that have launched a flurry of Windows 98-related antitrust action. These include the IE 4.01 Web browser, the Outlook Express email program, and a special icon for viewing channels (Web sites to which IE 4.01 users can subscribe to automatically receive updated content).

Although we love the fourth mini-icon–for Show Desktop, a nifty tool that minimizes all your open applications with one click so you can swiftly return to a clean screen if interrupted by prying eyes–you may find the others intrusive. Because we’re dedicated Ecco Pro users, we deleted the Outlook Express icon from the Taskbar by right-clicking on it and choosing Delete. Contrarily, if you’re a fan of Microsoft freebies, you can drag and drop icons from the QuickLaunch bar onto the desktop.

Just Browsing, Thanks

Between the launch of Windows 95 and Windows 98, a seismic event called the World Wide Web greatly affected the way the new Windows works and looks.

Several key components in Windows 98 look and operate much like a Web browser. According to Microsoft’s usability studies, nontechnical users have an easier time using a Web browser than the rest of Windows itself. Armed with that info, the company’s programmers turned the new OS into a virtual browser.

Need an example? Look no further than Windows Explorer or Control Panel. These modules once resembled their stodgy Windows 3.1 File Manager and Control Panel counterparts: a box full of icons. Now Windows Explorer looks like IE 4.01, complete with browser-like Forward and Back buttons in the toolbar and an address line where you can type either a DOS-style file address, such as C:\Program Files\Quicken, or a Web address to jump to an online site.

Not only do Control Panel, My Computer, and Recycle Bin offer views resembling Web pages, they also include a smattering of HTML code to give you more information about the contents of folders along with their bare-bones property lists. Move the mouse pointer over a disk icon, for instance, and you’ll see how much free space the disk has. We especially liked the warning that appeared when we clicked on the Windows folder, stating that we could potentially cause programs to stop working correctly if we modified the folder’s contents.

Changing to a traditional icon or list view is as easy as clicking a pull-down menu, but the browser metaphor is certainly snazzier and more helpful than the cut-and-dried look of Windows 95.

Nothing But Net

Windows 98 shines when working with the Web. In our admittedly unofficial tests, Internet Explorer 4.01 felt more stable and less prone to crashes than previous releases of IE, just as the OS itself did compared with Windows 95. If the majority of your home office tasks take place on the Web, the newlywed pair of IE 4.01 and Win 98 is worth a long look.

But legions of Windows 95 users are already browsing with IE 4.01, available free for the downloading from Microsoft’s site (www.microsoft.com/ie) or third-party libraries such as c|net’s www.browsers.com. If you’d like to join them, be prepared to set aside at least 50MB of hard-disk space to store the new browser. Expect a slight performance hit as well–IE 4.01 isn’t ideal for slower, older PCs.

Call us free-market radicals, but we still prefer Netscape’s Navigator browser (www.netscape.com)–which, like Internet Explorer, is readily available for free download. If Navigator is your favorite way to maneuver through the Web, it’ll work fine with Windows 98, but you won’t be able to use the OS’s seamless file management/Web browsing features.

Although the integration of Windows Explorer and Internet Explorer is almost enough to make us give up Navigator, we’re not in the least tempted by the Active Desktop feature that represents Microsoft’s exceptionally pushy brand of push technology. By default, the Windows 98 desktop displays a vertical Channel Bar of Web-site icons. Most of these take you to Microsoft-related sites such as the Microsoft Network and MSNBC. A few featured sites don’t belong to Microsoft (Warner Brothers, America Online, Disney, and PointCast), but you can be sure those companies paid hefty fees to appear in the Channel Bar.

Automatic delivery of favorite-site content sounds great in theory, but Web searching in the background while you try to work in the foreground is a notorious system hog. Unless you have an ultra-fast Internet connection and at least 64MB of memory, your MMX Pentium PC may start to feel like the 386 desktop you bought eight years ago.

Let’s be honest: Does the world need another invitation to try a taste of AOL? We think the regular AOL disks that litter the environment are enticement enough to try the online service, and find the Channel Bar to be overkill if you’re accustomed to using Web browser bookmarks to check your favorite sites. Fortunately, you can turn off the channels and get to work.

Performance Anxiety

PC processors have grown many times faster over the past few years. Software hasn’t. In fact, applications have become heavier and more bloated with extra features, multimedia clips, and animated help files. And while you wait for Windows 95 to launch, you may have wondered why you paid for a PC with a blazing chip.

Microsoft has streamlined Windows 98 to load and shut down quicker, and tweaked Disk Defragmenter to rearrange application components on your hard disk to launch faster (after you’ve opened and closed them a few times for Win 98 to monitor their behavior). However, you’ll need an advanced Pentium II-class system and ample memory to take advantage of these performance boosts. If you have an older machine, you’re out of luck–Windows 98 will perform about the same as Windows 95.

For this review, we used a Gateway Solo 2500 LS notebook with a 233MHz Pentium II processor, 64MB of RAM, and Windows 98 preinstalled. Although we had no complaints about launching such typical sluggards as Microsoft Word 97 and Excel 97, we still had to endure a tedious wait when we powered up the notebook.

If you’re still waiting for the day when Windows appears before your eyes as quickly as the image when you switch on a TV set, you’ll need one of the new PCs starting to appear with special fast-boot BIOS ROM code. Or you’ll have to pay a slightly higher electric bill and adopt a desktop that, like the newest Compaq and Hewlett-Packard consumer models, can enter a notebook-style “sleep” mode instead of actually switching off. On such a computer, Win 98 can schedule tasks such as defragmenting the hard disk or downloading Internet pages while you sleep.

To improve performance in a broader sense, Windows 98 provides a ton of new drivers for up-to-date peripherals, which will be immensely helpful when you add hardware such as a DVD-ROM drive to your system. Some other new hardware-related capabilities are flashy, but less universally useful: Got a spare monitor? You can finally mimic the Macintosh desktop-publishing trick of using two displays simultaneously (for, say, your page layout and your toolbar palettes). Got an ATI Technologies TV tuner or All-in-Wonder card connected to your cable TV outlet? As of press time, you’re the only customer for Microsoft’s TV Guide-like WebTV for Windows 98 viewer.

By contrast, we’re truly impressed with the operating system’s support for the Universal Serial Bus (USB) standard, which lets you simply plug a device such as a scanner or printer into a USB port without having to install an internal SCSI card. With USB, the promised days of plug-and-play will have finally arrived. It’s about time.

Two other under-the-hood improvements that earn our applause are FAT32 support and the new Windows Update feature. The former replaces DOS’s ancient disk format with a 32-bit File Allocation Table–a more efficient formatting scheme that stores files in smaller chunks to reduce wasted or slack space on your hard disk.
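A quick back-of-the-envelope calculation shows where the savings come from. The Python snippet below compares wasted (slack) space for a few small files under two cluster sizes; the 32KB and 4KB figures are typical values for a large FAT16 volume and a FAT32 volume, used here only for illustration.

```python
import math

def slack_bytes(file_sizes, cluster_size):
    """Wasted space: each file occupies whole clusters, so the last one is padded."""
    used = sum(math.ceil(size / cluster_size) * cluster_size for size in file_sizes)
    return used - sum(file_sizes)

# A handful of small files, sizes in bytes (illustrative).
files = [1_200, 6_500, 18_000, 700, 3_300]

print("FAT16 (32KB clusters):", slack_bytes(files, 32 * 1024), "bytes wasted")
print("FAT32 (4KB clusters): ", slack_bytes(files, 4 * 1024), "bytes wasted")
```

Multiply that difference across tens of thousands of files and the reclaimed disk space becomes noticeable.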

With the latter, you can click the Windows Update icon at the top of the Start menu and dial into a special Microsoft site where an Update Wizard automatically downloads and installs software updates, bug patches, and other assorted fixes to your operating system, drivers, and applications. Think of it as a regularly scheduled tune-up for your PC.

The Final Verdict

And now the real question: Should you upgrade to Windows 98? The answer isn’t crystal clear.

If you live on the Internet, enjoy the simplicity of using a Web browser, and daily move among and manage scores of downloaded HTML pages and files, then we wholeheartedly recommend making the move to the new Windows. For Webaholics, Windows 98 can help tame the onslaught of online data and speed up your computing chores.

Similarly, if you have a recent PC with USB ports or plan on upgrading your old one to take advantage of USB peripherals, then run, don’t walk, to Windows 98. It’s the ideal solution for avoiding potential technical problems down the road, especially when combined with Windows Update (PC, heal thyself). You’ll also find Microsoft’s performance tweaks have diminished the system crashes that plagued Windows 95, but you shouldn’t expect an entirely crash-free version of Windows–ever.

But if Windows 95 works fine for you and doesn’t crash often, and your Web activity is going well enough, then stay put: Windows 98 will be an insignificant upgrade. Microsoft says that any future software upgrades designed for Windows 98 will work under Win 95, and vice versa–including, we hear, the upcoming Office 2000 productivity suite.

If you still use Windows 3.1, you’re long overdue to take advantage of the current versions’ improved interface and the extra capacity of 32-bit computing, but the catch-22 is that your hardware likely isn’t up to Windows 98’s standards. For you, it probably makes more sense to take advantage of today’s thrifty PC prices by trading up to a brand-new notebook or desktop.

If you do buy a new system, the OS decision may be out of your hands: You’ll get Windows 98 rather than 95 preinstalled. This is the ideal way to graduate to a new operating system, because the PC manufacturer installs it from scratch and tests its performance with the hardware and software you’ll be using. With a new operating system on a virgin hard disk, the chances of clashes with leftover drivers, libraries, and other bits of remaining code from previous versions of Windows are near zero.

The bottom line on Windows 98? Microsoft has created a nice update to Win 95, but with nothing that screams “Buy me! Install me now!” as the latter did at its debut three years ago. Don’t get us wrong–Windows 98 will remain on some of our test systems. But for our mission-critical machines, we can wait for the inevitable bug fixes and add-ons that didn’t make it in the initial release. Windows 98 will be here for a while…what’s the hurry?

The solution, of course, is a disaster recovery plan and backup performed by Hard Drive Recovery Group (http://www.harddriverecovery.org/). But providing functional duplication of the key SP2s would mean incurring tremendous costs. So when IBM offered an alternative — its new 64-bit S70 — it took almost no time to make Holder a buyer.

“Economics was the key driver,” he says. “We brought in an S70 on a trial basis, and the loading of the apps was so much faster that it was obvious, even without tuning, the performance was there.”

Without bothering to do a detailed analysis, Holder concluded that a single S70 server would be sufficient to back up several SP2s and, down the road, would be able to scale well beyond current plans to increase capacity by installing even more SP2s in the future.

Now, the S70 hums away in a remote, unattended operation near Toronto, ready to shoulder the burden if necessary.

But while Holder clearly feels a 64-bit machine was the right solution for this particular problem, he, like other IT managers, is not yet ready to jump ship on tried-and-true 32-bit technology. More to the point, however, there are still precious few applications written to take full advantage of 64-bit addressing.

Indeed, as Dan Kusnetzky, program director for operating environments and serverware services at International Data Corp., observes, “it is almost as if vendors are rushing to 64-bit only to get there before some other vendor does, not because a mass of end users demanded it.”

For his part, Holder is counting on IBM’s plans to continue to expand the capabilities of the SP2 architecture. For now, though, he’s content knowing that 64-bit can deliver substantially higher performance, even with code that is not 64-bit native. And for his “worst case-only” app, that’s still a good enough fit.

Remortgaging The Future

While Holder has chosen to limit his use of 64-bit computing, a small cadre of early adopters has taken the next step — investing in 64-bit Digital AlphaServers as a cornerstone of future operations. One of the more high-profile examples is Fannie Mae. Established in 1938 as the Federal National Mortgage Association, Fannie Mae is a private corporation, federally chartered to provide financial products and services that increase the availability and affordability of housing for low-, moderate-, and middle-income Americans.

By one measure, Fannie Mae has the distinction of being the largest corporation in the U.S., with $366 billion in assets and an additional $674 billion in mortgage-backed securities outstanding. The company is one of the largest issuers of debt after the U.S. Treasury and its stock is traded on the NYSE.

What makes Fannie Mae successful is, in part, its ability to accurately predict which borrowers will be reliable debtors and which will be likely to default. “Whoever analyzes the risk best can price best,” says Eric Rosenblatt, senior credit analyst, thereby increasing the size and productivity of the loan portfolio. “On the margins you will end up much less relaxed about taking a deal when your equations are producing less specific results,” he says.

The problem is not unlike that described by Coleridge’s Ancient Mariner — there’s water everywhere but nothing to drink. Fannie Mae has more than 10 million loans, each of which comes with complex transactional records. But slogging through that sea of data takes lots of processing horsepower. The problem is aggravated by the type of processing that needs to be done, where the questions are nearly always open-ended and the answers are difficult to predict.

“The statistical work we do is an iterative process,” says Rosenblatt. “There is no closed-form solution, you just keep making estimates and getting closer to the point where you feel comfortable with the answer. We look at the relationships among factors such as the amount of down payment, the borrower’s credit history, and the likelihood of default,” he says. “In practice it is very complicated to isolate this relationship out of a huge economic pie.”
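As a stylized picture of that iterative style of estimation (and emphatically not Fannie Mae’s actual models), the Python sketch below fits a tiny logistic-style default model by repeated gradient steps, stopping once the estimates barely move; the loan data and factor names are invented.

```python
import math

# Invented toy data: (down_payment_fraction, prior_delinquencies, defaulted?)
loans = [
    (0.20, 0, 0), (0.05, 2, 1), (0.10, 1, 1),
    (0.30, 0, 0), (0.03, 3, 1), (0.25, 1, 0),
]

def predict(w, x1, x2):
    """Estimated probability of default for one loan."""
    return 1 / (1 + math.exp(-(w[0] + w[1] * x1 + w[2] * x2)))

# Iterative estimation: keep nudging the weights until they barely change.
w = [0.0, 0.0, 0.0]
rate = 0.5
for step in range(2000):
    grad = [0.0, 0.0, 0.0]
    for x1, x2, y in loans:
        err = predict(w, x1, x2) - y
        grad[0] += err
        grad[1] += err * x1
        grad[2] += err * x2
    w = [wi - rate * gi / len(loans) for wi, gi in zip(w, grad)]
    if max(abs(gi) for gi in grad) < 1e-4:   # "comfortable with the answer"
        break

print([round(wi, 2) for wi in w])
```

There is no closed-form answer here either; the loop simply keeps refining its estimates, which is the flavor of work Rosenblatt describes, scaled down to a few lines.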

Why has Fannie Mae invested in 64-bit processing? Because in the financial industry the ability to make rapid and reliable decisions confers competitive advantage. “We want to be one step ahead of the competition,” says Rosenblatt.

Fannie Mae has come a long way. “In the past this data resided on mainframes and was not easily accessible to our economists and policy makers,” says Mary Ann Francis, director of database systems development. “We sometimes spent weeks spinning through tapes to gather information about a particular cohort of loans for research. On the AlphaServer we’re able to put as much of this data online as we want. For researchers this means faster, more flexible SAS System queries — and the ability to look across multiple databases.”

“For my own work,” adds Francis, “the most striking benefit is the speed of database administration, thanks to the parallel-processing features of Alpha and Oracle. Files that we once had to load overnight on the mainframe now load in two to three minutes during the day. Now we load 60-million-row source files into our database table in 28 minutes. It has truly changed our world.”

Fannie Mae’s search for an alternative to its mainframe focused on three key criteria: performance, availability, and affordability. “Our new systems had to support parallel processing and relational databases,” says Francis. “For the database, we did a structured competitive benchmark, and Oracle won based on company strengths and test execution. After that, Digital did a benchmark for us of Oracle on Alpha. In some cases,” she says, “the AlphaServer produced results that were an order of magnitude faster than other platforms.”

Implementation went smoothly. For example, says Rosenblatt, more than 95% of Fannie Mae’s SAS code was migrated without change from the mainframe. “It took just three months to get the system running and integrated into our environment, our systems people trained, and the data loaded and accessible to users,” adds Francis.

Server Connections Needed

Where the rubber meets the road, in the Wild West world of the ISP, there is also a glimmer of interest in 64-bits — sometimes because of scalability issues and sometimes simply because of the need to squirt lots of data out to users.

SBC Internet Services (formerly Pacific Bell Internet), which came in “tops in overall customer care service quality” among U.S. ISPs, according to a recent benchmarking report by the Washington Competitive Group LLC, has been wrestling with runaway growth. Currently, SBC provides Internet access services to more than 300,000 business and residential customers — a customer base it wants to continue to expand at a high rate.

Ensuring continued growth and those great service marks is, in part, the responsibility of Chris Torlinsky, senior systems manager. “Our requirement is the ability to scale to millions of users,” he says. “In order to do that we will be stretching the limits of our 32-bit computers. When it comes to data-intensive applications, like a database, there is a need for 64-bits, if not right now then in the very near future.”

Currently, Torlinsky is waiting for his 32-bit supplier, Sun Microsystems, to provide for full 64-bit processing. “Maybe in a year we will be on 64-bit systems,” he says. Although Torlinsky says SBC does use some formal metrics for making major decisions, like a 64-bit migration, he says some things are still decided on a seat-of-the-pants basis. And this may be one of them.

“We may decide very quickly that some applications have to go to 64-bits but for the foreseeable future I don’t think we will decide to move everything to 64-bit,” he says. Indeed, Torlinsky says one of the positive attributes of Sun’s offerings, which helped shut out other vendors, is that customers can continue to choose either the 32- or 64-bit option on 64-bit hardware. “With Sun, if you want compatibility with 64-bits, it’s just a matter of choosing it,” he says.

That kind of freedom — to choose 64-bits for select apps while keeping tried-and-true 32-bit apps elsewhere — seems to be what many IT managers wish they could get as they contemplate the potential perils of migration.

Clearly, 64-bit computing is making its way into the mainstream. But enthusiasm for 64-bits can be misplaced, warns Jonathan Eunice, an analyst with Nashua, N.H.-based Illuminata. “64-bits is just one of maybe a dozen major engineering variables a technical evaluator should examine as part of an enterprise server purchase,” he says. Things like I/O channel, the system bus, clustering, and operating system multithreading can be just as important.

“You need to look at deliverable performance through all the software layers,” says Eunice. And that, he adds, is best done by measuring hoped-for outcomes by testing your apps with custom benchmarks. Only then, he argues, can you understand the end-to-end performance characteristics of 64-bit when compared to more mainstream options.
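A custom benchmark in Eunice’s sense does not have to be elaborate. The fragment below is a minimal sketch rather than a recommended methodology: it times a stand-in workload, a full scan of a large in-memory array, with a monotonic clock. The idea is to substitute a representative slice of your own application and run the identical code on the 32-bit and 64-bit configurations you are comparing.

```c
/* A minimal custom-benchmark harness: time a stand-in workload with a
 * monotonic clock. Substitute a representative slice of your own
 * application and run the identical build on each candidate platform. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ROWS (50 * 1000 * 1000)   /* stand-in "table": 50 million rows */

int main(void) {
    long *table = malloc(ROWS * sizeof *table);
    if (!table) { perror("malloc"); return 1; }
    for (long i = 0; i < ROWS; i++) table[i] = i & 0xFF;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    long sum = 0;                          /* the "workload": a full-table aggregate */
    for (long i = 0; i < ROWS; i++) sum += table[i];

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("aggregate = %ld, elapsed = %.3f s\n", sum, secs);

    free(table);
    return 0;
}
```

Compiled and run on each candidate system, the same harness yields an end-to-end number that already reflects the compiler, the operating system, and the memory subsystem together, which is exactly the deliverable performance Eunice is talking about.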

32-bit or 64-bit?

A move to a 64-bit hardware and operating system platform isn’t for everyone. But for those with specific high-performance requirements it can be the answer. Here’s what to expect from 64-bit computing:

* With 64-bits you have the ability to cache much more in memory. For applications such as data warehousing that means queries will run much faster. Similarly, with OLTP applications you can have much more concurrency among users without requiring additional servers and overhead.

* 32-bit machines are generally limited to 2GB of directly addressable memory per process: ample for run-of-the-mill computing, but not for large mission-critical or highly memory-intensive applications such as data warehousing.

* With 64-bits there is no longer a 2GB limit on “chunk size,” the size of a file that can be loaded (see the sketch after this list).

* Although 32-bit machines can still handle most of the tasks that a 64-bit machine might tackle, they will usually require more CPUs and/or more individual servers, potentially increasing initial purchase costs as well as ongoing support and maintenance.
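For readers who want to see where their own builds stand on these limits, the short program below is an illustrative check only; the exact ceilings depend on the operating system and compiler flags, not just the hardware. It reports the width of pointers and file offsets in the current build, which is where the 4GB address-space and 2GB chunk-size figures come from.

```c
/* Illustrative only: report how wide this build's pointers and file
 * offsets are, the quantities behind the 4GB address-space and 2GB
 * "chunk size" ceilings discussed above. */
#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>

int main(void) {
    printf("pointer width:      %zu bits\n", sizeof(void *) * 8);
    printf("max address space:  %llu bytes\n",
           (unsigned long long)SIZE_MAX);   /* about 4GB when pointers are 32 bits wide */
    printf("file offset width:  %zu bits\n", sizeof(off_t) * 8);
    /* a 32-bit signed off_t tops out at 2^31 - 1 bytes, i.e. the 2GB file limit */
    return 0;
}
```

Building the same source as a 32-bit binary, as a 64-bit binary, and, on 32-bit Unix systems, with a large-file option such as glibc’s -D_FILE_OFFSET_BITS=64, shows how much of the ceiling comes from the hardware and how much from the software stack.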

Building a Virtual Organization on 64-bits

California dreaming is becoming a reality in the Secretary of State’s office, where, according to CIO David Gray, they are well on their way to creating online analogs for every function they handle. “Within five years we want all our services to be on the Internet — everything from registering to become a notary to filing corporate documents or accessing the California state archives,” he explains.

“Web-enabling our public services stands to lower our costs by reducing manpower, office rental, paper forms, and processing and filing,” says Gray.

“As an engineer I understand the technology — the larger instruction size and registers of 64-bit,” says Gray. “I can’t say we have any metrics that would quantify whether there is a big difference between being 64-bit or 32-bit. My gut feeling is that most applications wouldn’t know the difference — unless they were doing something computationally intensive.”

Part of the formula for achieving Gray’s ambitious goal has been a flight from mainframes to 64-bit Digital AlphaServers running 64-bit Digital Unix and Oracle 7 for the enterprise database, as well as 32-bit Windows NT and Microsoft SQL Server.

But in some applications, the story is different. Gray, for instance, supports a distributed database with 20 million voter registration records. It connects to 58 counties and does regular data cleansing. “It runs very well on the AlphaServers,” says Gray.

Indeed, his office is in the process of expanding the system to provide an online repository for campaign and lobbyist disclosures. “And our intent is to make that available on the Internet,” he adds.

Auto Auction Tune-Up

While corporations like federal mortgage lender Fannie Mae, with its mounds of data needing to be crunched, represent a classic vendor argument for 64-bit computing, others have found that even comparatively modest IT operations can benefit from a bit boost. Take, as an example, ADT Automotive Inc. in Nashville, Tenn., one of the nation’s top three wholesale vehicle redistribution and auction companies.

“If it weren’t for our 64-bit system we would have had to scale back our applications or distribute them over multiple computers,” says Eddy Patton, the company’s project leader for systems and development.

ADT’s LION provides online information and supports the auctioning system. It was first implemented as part of a private intranet, but is now also available to 900 auto dealers around the country via the Internet. LION runs as part of an integrated 32-bit IBM AS/400 and 64-bit Digital Alpha system linking 28 auction sites and supporting up to 500 simultaneous log-ins.

Patton’s enthusiasm is undiminished despite the fact that he is currently running 32-bit NT as his operating system. Still, he says, having 64-bit hardware has provided overall performance benefits.

“Although ADT was an all-IBM and SNA shop prior to bringing in the AlphaServer, we have found that the integration of Windows NT and Alpha technology into our environment went very smoothly,” says Jim Henning, vice president of information services.

‘Gotta Have It’ – Eventually

Nicholas-Applegate, a global investment management firm with offices in San Diego, Houston, San Francisco, New York, London, Singapore, and Hong Kong, currently manages more than $31.6 billion in global assets in more than 60 countries. While nowhere near as large an operation as federal mortgage lender Fannie Mae, the company is finding that 32-bit computing has limitations.

Unix Systems Manager David Buckley says he and his shop are eagerly awaiting the availability of a full-fledged 64-bit hardware and OS platform from Sun, their long-time hardware vendor. According to Buckley, his database indexes can’t grow much larger under Sun’s current 32-bit Solaris operating system. “We really need the ability to have files bigger than 2GB,” he says.

Buckley says he is happy to follow the forward-migration path set out by Sun. “In our part of the financial market,” he says, “most of the applications are written for Sun. We have been married to them for a while and our staff is more Sun-literate than they are on any other Unix.” Those factors, he says, kept them from even considering another vendor for the 64-bit transition.

Full 64-bit functionality, including the kernel, will be available with Sun Solaris 2.7, probably before the end of the year, Buckley says. Then, the company can begin to explore 64-bit computing in earnest, he adds.

Achieving Higher Performance

What is meant by 64-bit and what are the risks and rewards of adopting this technology? According to Informix Corp.’s Nasir Ameireh, a senior product marketing manager for Informix Dynamic Server (IDS), there are some fundamental concepts you must grasp and steps you should take when considering implementation.

For instance, he notes that 64-bit isn’t just a CPU with a large word size or the ability to fetch more instructions per cycle. Nor is it simply a matter of data bus performance or choosing 64-bit applications or compilers. It’s all of these things and more. You can’t have a 64-bit application without a 64-bit compiler and an operating system to support the results. In short, you need it all.

What’s more, the true value of 64-bit systems may not appear until all of these things are present because all are interdependent.
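A small illustration of that interdependence, assuming a GCC/Clang-style or Microsoft compiler (other toolchains predefine different macros): the same source code only becomes a 64-bit application when the compiler and target operating system use a 64-bit data model, which is the “you need it all” dependency Ameireh describes.

```c
/* Whether this is a 64-bit application is decided by the compiler's
 * data model and target OS, not by the source code itself.
 * __LP64__ is predefined by GCC/Clang on LP64 Unix targets; _WIN64 by
 * Microsoft compilers for 64-bit Windows. Other toolchains may differ. */
#include <stdio.h>

int main(void) {
#if defined(__LP64__) || defined(_WIN64)
    puts("built as a 64-bit application");
#else
    puts("built as a 32-bit application");
#endif
    printf("sizeof(long) = %u bytes, sizeof(void *) = %u bytes\n",
           (unsigned)sizeof(long), (unsigned)sizeof(void *));
    return 0;
}
```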

Some of these elements, he notes, raise interesting issues: If it is possible to stuff more instructions into a single fetch, how do you handle out-of-order execution? What is the impact on cache performance? How good is the compiler at exploiting this? This, he says, gives rise to the need for 64-bit standards such as Intel’s IA-64 and the 64-bit architectures from Digital/Compaq, Sun, and IBM.

To take advantage of 64-bit hardware and operating systems, you need a minimum of 4GB of memory. Why? Because 32-bit systems can already address 4GB of memory. Moving to 64-bit hardware and operating systems with less than 4GB of memory, and without a complete plan for database engines, applications, and system tuning, could actually result in poorer performance, Ameireh explains.
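The arithmetic behind that rule of thumb is simple, and the toy program below just spells it out: a 32-bit address reaches 2^32 bytes, which is 4GB, so a 64-bit machine fitted with less memory than that has no extra addressable capacity to exploit.

```c
/* The arithmetic behind the 4GB rule of thumb: 32 address bits reach
 * 2^32 bytes, so physical memory below that line gains nothing from
 * wider addressing. */
#include <stdio.h>

int main(void) {
    unsigned long long limit32 = 1ULL << 32;   /* bytes reachable with 32-bit addresses */
    printf("32-bit addressing limit: %llu bytes (%.0f GB)\n",
           limit32, limit32 / (1024.0 * 1024.0 * 1024.0));
    /* 64-bit addressing reaches 2^64 bytes, about 16 exabytes in theory */
    return 0;
}
```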

Furthermore, he says, it is important to determine whether your applications need to be 64-bit, and you should have a plan, including a test plan, in place. If you decide to stay with 32-bit applications, you should understand how they will coexist in future mixed or all-64-bit environments. It is also important to understand the impact of 64-bit on third-party products, which need to be “64-bit safe” or “64-bit enabled.”

“The best fit for CE today is in places where there’s a visual user interface,” said Paul Zorfass, lead embedded analyst at First Technology Inc. He cited consumer-electronics applications, along with smart phones, vehicle navigation systems and designs requiring interapplication connectivity.

Paul Rosenfeld, director of marketing at Microtec, believes the “bottom line” is that “there are some apps that CE does really well now; there are some that it might do better in a year, when CE 3.0 comes along; and there are some that it’ll never do well.”

CE 3.0 will be best-positioned for broader design wins if it can meet specific technical targets, analyst Zorfass said. “Microsoft has to do two things: CE 3.0 has to have a good level of determinism, and it has to be configurable.” The latter would mean a truly modular implementation, probably including source code, which would enable customers to pick and choose the pieces of the OS they want to use.

Despite the paucity of CE-based deeply embedded apps so far, many industry players believe it’s a mistake to dismiss the OS, because Microsoft and its partners are still priming the pump.

Indeed, to get momentum behind CE, Microsoft has strung together an unusual series of partnerships and marketing arrangements in the RTOS community. For example, Integrated Systems is offering support services for CE. Yet the company competes with CE via its own pSOS RTOS.

Most vendors explain such fence-straddling approaches by stating their belief that CE won’t encroach on their bread-and-butter, deeply embedded applications. “CE 3.0 will be harder, as opposed to hard, so it addresses a bigger piece of the market,” said Microtec’s Rosenfeld. “But there’s still a big difference between where [Microtec’s] VRTX is targeted and where CE is aimed.”

At this week’s conference, however, CE will get a push that could prod it into one such arena: complex, ASIC-based embedded designs.

In that regard, Microtec will announce a deal with ARM under which Microtec will provide an integrated tool kit to enable deployment of CE on custom ARM-based embedded applications.

More important, the tools are intended to work in a hardware/software co-development environment. That’s the kind of setup engineers are likely to use with ARM, argues Microtec.

“There are some unique challenges, such as debugging, that engineers face in deploying CE on custom ASIC hardware,” said Rosenfeld. “In the traditional X86 world, you have standard boards. However, in the ARM environment, you don’t even have standard processors; you have core-based ASICs.”

Indeed, the use of application-specific processors is becoming the rule rather than the exception in the embedded world. Along with ARM, rivals include core-based offerings built around the MIPS Technologies architecture. Also in the picture are processors powered by Hitachi’s SH3 and SH4 designs, along with PowerPC and X86 implementations. On the software front, all are supported by CE.

Applied Microsystems Inc. (Redmond, Wash.), for one, is addressing deeply embedded designs based on PowerPC, X86 and SH with its new Foundation-CE development suite (see Oct. 19, page 44). The company will highlight the suite at this week’s conference.

Indeed, with so many platforms in play, the industry is realizing that serious embedded developers working in joint software/hardware environments won’t be able to get by with old-style software tools. “There’s multiple layers here,” Rosenfeld said. “If all you’re going to do is build apps, you can get away with a $500 tool. However, if you’re building your own board, then you’ve got a problem, because those tools don’t have the foundation for hardware debug.

“Looking further ahead, if you’re going to do your own ASIC, you have another level of challenge, because you need $50,000 ASIC design tools.”

While support for CE on embedded CPUs is important if the OS is to gain ground, there’s another key element in the equation. Specifically, Microsoft must improve the real-time performance of CE 3.0. Yet even Microsoft acknowledges that CE may never deliver the guaranteed, ultrafast interrupt response times, features known as determinism and low latency, that are the equal of the best of the traditional microkernels.

That is one reason Microsoft isn’t pushing CE for the toughest of the hard real-time apps but is pulling back half a step to tout the OS for many, but not all, applications. At the same time, Microsoft has expanded the discussion that’s long been used to quantify interrupt response.

Indeed, Microsoft’s Barbagallo argues that two numbers must be taken into consideration: “There’s the time it takes when an interrupt comes in. Then there’s the time it takes to respond to that interrupt.”

The time it takes for CE to acknowledge that an interrupt has come in is under 10 microseconds, Barbagallo said. “On that playing field, we’re exactly the same as all the other RTOSes. Where we in fact are slower is in the time it takes to register that interrupt if you want the OS to do something with it. In that scenario, we’re under 50 microseconds.

“In contrast, an RTOS like QNX or pSOS probably takes on the order of 10 to 15 microseconds to flip it over to the OS.”
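Barbagallo’s two numbers correspond to a two-stage handling model: a small routine acknowledges the hardware almost immediately, and a normal thread is then scheduled to do the actual servicing. The sketch below models that split with ordinary POSIX threads rather than real interrupt hardware or any Windows CE API, so every name in it is illustrative; the point is only to show why the second latency is larger than the first.

```c
/* Schematic of two-stage interrupt handling, modelled with POSIX
 * threads: a tiny "ISR" acknowledges the event and signals a waiting
 * "interrupt service thread" (IST), which the scheduler must then run
 * before the device is actually serviced. The first hop corresponds to
 * the acknowledge figure, the second to the slower respond figure. */
#include <stdio.h>
#include <stdbool.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  irq_event = PTHREAD_COND_INITIALIZER;
static bool pending = false;

/* Stage 1: the "ISR" marks the logical interrupt and hands it off fast. */
static void isr(void) {
    pthread_mutex_lock(&lock);
    pending = true;
    pthread_cond_signal(&irq_event);    /* wake the IST */
    pthread_mutex_unlock(&lock);
}

/* Stage 2: the "IST" is an ordinary thread that does the real work. */
static void *ist(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!pending)
        pthread_cond_wait(&irq_event, &lock);
    pending = false;
    pthread_mutex_unlock(&lock);
    puts("IST: servicing the device");
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, ist, NULL);
    isr();                              /* pretend the hardware fired */
    pthread_join(t, NULL);
    return 0;
}
```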

Still, Microsoft is giving no quarter to its competitors, noting that CE can use hardware-trapped interrupts for specific cases in which immediate service is mandatory.

RTOS counterpunch

Not surprisingly, Microsoft’s rivals don’t see things that way.

With CE 3.0, Microsoft will be competing most aggressively against two specific RTOSes: VxWorks, from Wind River Systems, and QNX, from QNX Software Systems Ltd. For their part, both vendors will provide their own respective counterpunches at this week’s confab.

Wind River will come to the conference to showcase a major new release of its real-time development environment (see related story, page 80). Dubbed Tornado II, the integrated development environment comes with a new debugger and with integrated simulation capabilities to support hardware/software co-development. It’s also closely coupled to the company’s VxWorks real-time operating system.

Tornado II consists of three highly integrated components: the Tornado suite of tools, a set of cross-development tools and utilities running on both the host and the target; the VxWorks run-time system, which is billed as a high-performance, scalable real-time operating system that executes on the target processor; and a full range of communications software options.

“Tornado II provides a complete framework for our new networking and graphics environments,” said Wind River president and chief executive officer Ron Abelmann.

In addition, the company recently launched a major push into graphical user interfaces for embedded applications, with the release of eNavigator and HTMLWorks. The two GUI-development tools are built around hypertext markup language (HTML) technology.

That strategy directly addresses another debate that’s roiling the embedded arena: how to support GUIs. Previously, many resource-constrained embedded apps had displays that were rudimentary at best. Now, fancy display-based interfaces are the order of the day.

To meet developers’ needs, vendors are pursuing a couple of approaches. One tack assumes that embedded developers are better off with a fixed, and often heftier, set of GUI resources. The other approach gives developers the option of mixing and matching GUI components, the better to meet the parameters of resource-constrained designs.

For example, one of the raps against CE is that it is too bulky because it includes a full GUI (see related story, page 79). Microsoft disputes that assertion, claiming CE is completely customizable.

“If you want to build a full-up user interface that looks like Windows, then that takes up a fair bit of code, whether you build it with CE’s primitives or with Zinc primitives,” said Microsoft’s Barbagallo. “There’s no getting around that. However, if you want a smaller, specialized interface, maybe your application uses an LCD display, then you can do it with a very small amount of CE code.

“There’s fundamentally no difference there between us and our competitors; it’s just software functionality,” Barbagallo added.

To prove its flexibility, Microsoft will showcase at the embedded conference a Unisys bank-check scanner based on CE that uses a downsized user interface.

For its part, QNX Software Systems is looking to blunt CE by playing its Java card. At this week’s show, QNX will demonstrate a real-time Java application for robotics control running under its Neutrino RTOS.

“We have been extremely active in developing the real-time requirements for Java,” said a company source, noting that the demo will also be shown in the HP and IBM booths at the conference.

As the QNX demo indicates, Java in the embedded world is beginning to move from the talking stage to the implementation phase. However, as with CE, things seem to be at the early end of the pipeline.

Sun Microsystems’ EmbeddedJava spec, which has some nine licensees, is continuing its movement from paper spec into code. At the same time, HP is proceeding with the incursion into the embedded Java world that it launched this past spring.

Neither Sun’s nor HP’s Java implementations run by themselves; both must be paired with a traditional RTOS. HP’s Java implementation has gained ground through license agreements with Integrated Systems, Lynx Real-Time Systems Inc., Microware and QNX. In addition, Wind River has endorsed, though it has not formally licensed, the HP technology. (Microware, QNX and Wind River are also licensees of Sun’s EmbeddedJava.)

Microsoft, too, is in the Java picture. It has licensed HP’s Java Virtual Machine and plans to integrate it into Windows CE.

Lurching forward

The CE and Java arenas share one notable similarity: Their respective proponents are attempting to build support via marketing deals. Just as Sun and HP are trying to sign up Java partners, Microsoft is lining up supporters for CE.

“Our business model for the embedded space is no different than our business model in the desktop space,” Microsoft’s Barbagallo said. That model is “the run-time revenues we receive from the OS.”

“We’ve been very clear to all our partners as far as what tools we need to provide support for CE in the embedded space,” Barbagallo added. “Because they see we have this platform approach, they’re saying it makes sense to build tools for CE, because we’re not going to be competing against them.”

However, Microsoft may have to tune its long-term marketing tactics to better address embedded designers.

“The embedded market is different from the desktop market,” analyst Zorfass said. “It’s got slower adoption cycles and different requirements.