FIM 2010 R2 RC

For those who missed it or need it, the following is a list of the changes included in FIM 2010 R2 RC:  


The Software Crisis: A Brief Look at How Rework Shaped the Evolution of Software Methodologies

Introduction Software development, for all the contributions it has made to society in terms of information availability and improved efficiency, is a high-risk venture. Reportedly, 70% of software projects either fail to achieve their full purpose or fail entirely. The reasons for this high failure rate are varied and numerous; however, they are…


What is Dynamic DNS?

Introduction What is Dynamic DNS? The short technical answer is: “Dynamic DNS (DDNS) is an addition to the DNS standard. Dynamic DNS defines a protocol for dynamically updating a DNS server with new or changed values. Prior to DDNS, administrators needed to manually configure the records stored by DNS servers. DDNS allows this to…


Identity Management: A System Engineering Survey of Concepts and Analytical Approaches.

Abstract Identity Management is a new and emerging field where business processes and technology are combined to create identity-centric approaches to the management of users, their attributes, security privileges, and authentication factors across systems within an enterprise (Hitachi ID Systems, Inc., 2009). The purpose of this document is to offer a systems-engineering-oriented examination…


A Letter to My Clients: How Computers Work

Dear Mr. Client O’Mine: Per your request for a high-level explanation of how computers work, above is a diagram, along with a walkthrough that will hopefully dispel the mystery. Let us start with the central processing unit (CPU). It is the core of any computer, and technically speaking, the components that make…


Even more about CPUs: What is CPU Caching?

Well, the slowest steps in the fetch-execute cycle are those that require accessing memory (Englander, 2007). CPU caching is a technique developed to minimize the impact that accessing memory has on the overall processing performance of a CPU. The technique involves placing a small amount (or several levels) of high-speed memory between the CPU and…
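To make the effect concrete, here is a small C sketch (not from the original post; the array size and stride values are arbitrary illustration choices) that does the same amount of summing work with a cache-friendly sequential walk and with large strides that defeat the cache:

```c
/* A rough, self-contained sketch (not from the original post) of why CPU caches
 * matter.  The same number of array elements is summed each time; only the
 * access pattern changes.  With stride 1 the walk reuses each 64-byte cache
 * line 16 times; with larger strides almost every access touches a new line
 * and falls through to main memory.  Sizes and strides are arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (16 * 1024 * 1024)    /* 16 Mi ints (64 MB): larger than typical caches */

static long walk(const int *a, size_t stride) {
    long sum = 0;
    /* Visit every element exactly once, regardless of stride. */
    for (size_t start = 0; start < stride; start++)
        for (size_t i = start; i < N; i += stride)
            sum += a[i];
    return sum;
}

int main(void) {
    int *a = calloc(N, sizeof *a);
    if (a == NULL)
        return 1;

    size_t strides[] = { 1, 16, 1024 };   /* 4 B, 64 B, 4 KB between accesses */
    for (size_t s = 0; s < sizeof strides / sizeof strides[0]; s++) {
        clock_t t0 = clock();
        long sum = walk(a, strides[s]);
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("stride %4zu: %6.2f s  (checksum %ld)\n", strides[s], secs, sum);
    }

    free(a);
    return 0;
}
```

Even though every run performs exactly the same number of additions, the large-stride runs are typically several times slower, because nearly every access has to be served from main memory instead of the cache.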


CPU Parallelism: Techniques of Processor Optimization

There are two forms of parallelism that serve to improve the performance of processors: the first is Instruction-Level Parallelism (ILP). ILP consists of applying the techniques of superscalar processing and pipelining to overlap the execution of as many instructions as possible (DeMone, 2000). Superscalar processing and pipelining are two ILP techniques for improving the…
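As a rough illustration of what overlapping execution buys, the following C sketch (invented for this summary, not taken from the post; the loop sizes are arbitrary, and it assumes compilation with ordinary optimization such as -O2 but without -ffast-math) performs the same additions two ways: once as a single dependency chain, and once as four independent running sums that a superscalar, pipelined core can execute in overlapping fashion:

```c
/* A rough sketch (invented for this summary) of instruction-level parallelism.
 * Both functions add up the same values the same number of times.  In the
 * first, every addition depends on the result of the previous one, so the
 * additions must execute one after another.  In the second, four independent
 * running sums let the core keep several additions in flight at once.
 * Compile with ordinary optimization (e.g. -O2) but without -ffast-math, so
 * the compiler does not reassociate the floating-point sums itself. */
#include <stdio.h>
#include <time.h>

#define K    1024       /* small array: it stays resident in the L1 cache   */
#define REPS 200000     /* repeat passes so the timings are easy to measure */

static double dependent_sum(const double *a) {
    double s = 0.0;
    for (int r = 0; r < REPS; r++)
        for (int i = 0; i < K; i++)
            s += a[i];                   /* one long dependency chain */
    return s;
}

static double independent_sum(const double *a) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    for (int r = 0; r < REPS; r++)
        for (int i = 0; i < K; i += 4) { /* four chains, independent of one another */
            s0 += a[i];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
    return s0 + s1 + s2 + s3;
}

int main(void) {
    static double a[K];
    for (int i = 0; i < K; i++)
        a[i] = 1.0;

    clock_t t0 = clock();
    double d = dependent_sum(a);
    double td = (double)(clock() - t0) / CLOCKS_PER_SEC;

    t0 = clock();
    double p = independent_sum(a);
    double tp = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("single dependency chain: %.3f s  (sum %.0f)\n", td, d);
    printf("four independent chains: %.3f s  (sum %.0f)\n", tp, p);
    return 0;
}
```

On most modern processors the second version finishes noticeably faster, purely because the hardware can issue and retire several of the independent additions in parallel.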


More on Cores: Single Core? Dual Core? Quad Core? What’s the Difference?

The core of a processor refers to its components, along with system memory, that facilitate the fetch-execute cycle by which computers read (fetch) and process (execute) the instructions of programs. Although the physical implementation of a chip depends upon its architecture, all CPUs consist of two logical components: the arithmetic/logic unit (ALU) and the control unit…
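The fetch-execute cycle the excerpt describes can be sketched as a short loop in C; the three-instruction machine below is purely hypothetical and exists only to show the control unit's fetch and decode steps and the ALU's execute step:

```c
/* A toy sketch (purely hypothetical, not from the original post) of the
 * fetch-execute cycle: the control unit fetches the instruction at the
 * program counter and decodes it, and the ALU executes it.  The
 * three-instruction "machine language" is invented for illustration only. */
#include <stdio.h>

enum { LOAD = 1, ADD = 2, HALT = 3 };   /* toy opcodes */

int main(void) {
    /* "System memory": each instruction is an opcode followed by one operand. */
    int memory[] = { LOAD, 5,      /* acc = 5        */
                     ADD,  7,      /* acc = acc + 7  */
                     ADD,  30,     /* acc = acc + 30 */
                     HALT, 0 };

    int pc = 0;         /* program counter: address of the next instruction */
    int acc = 0;        /* accumulator register                             */
    int running = 1;

    while (running) {
        int opcode  = memory[pc];        /* fetch the instruction ...       */
        int operand = memory[pc + 1];    /* ... and its operand             */
        pc += 2;                         /* advance to the next instruction */

        switch (opcode) {                /* decode, then execute            */
        case LOAD: acc = operand;       break;
        case ADD:  acc = acc + operand; break;  /* the ALU's share of the work */
        case HALT: running = 0;         break;
        }
    }

    printf("accumulator = %d\n", acc);   /* prints: accumulator = 42 */
    return 0;
}
```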


CPU Core Symmetry: Asymmetrical versus Symmetrical

The symmetry of a multi-core processor refers to whether the cores are of a homogeneous or heterogeneous design. A processor with asymmetrical cores is one in which the design of the cores is heterogeneous. Typically, this means that, in relation to one another, each of the cores can be designed to operate with different instruction…


Computer Memory: A Brief Survey of Technologies

Below is a (very) brief cheat-sheet of descriptions of the most commonly used memory technologies and specifications today: Dual In-line Memory Modules (DIMM) — A Dual In-line Memory Module (DIMM) is actually not a type of memory but rather simply a number of memory components placed onto a circuit board with 240 pins, which provide an…