Wednesday, February 12, 2014

HCI Trends for a New Era

Experts predict that the computer, at least as we know it today, will disappear before long. The computer will be integrated into other devices, and the user will not be aware of its existence other than through the functions it offers. This seems to mean the disappearance of the explicit user interface and the development of a new implicit interface, focused on concrete tasks, more intelligent, and able to communicate with other elements. It is no longer about the physical machine; we are getting away from the desktop, and interaction styles are different. In a couple of years we might not even be conscious that computers are around (Rozanski, 2010).

The field of HCI will be characterized by two trends: an evolutionary trend, which improves the usability of current interactive systems and develops new methodologies and design tools adapted to the industrial environment; and a revolutionary trend, which tries to create a new generation of interfaces characterized by being smarter, mobile, and less visible to the user.

The evolutionary trend of HCI will work on the development of new concepts of interface usability, increasing the knowledge that we have about the user perspective and developing new methods for implementing these ideas. However, many experts believe that the current development of interaction systems has reached an impasse, given that most new designs are found to be variations on the same theme (Moulton, 1998). Achieving a substantial advance in this area requires profound changes that introduce new styles of interaction, including new input/output devices or mechanisms. Until now it was expected that these changes would come from the advancement of virtual reality and multimedia systems. Nowadays, most experts are betting on ubiquitous systems, mobile computing, natural language interfaces, etc.

Intelligence, personality, expression, the ability to understand meaning, interactivity, and sensory richness are all essential to good interface design. Future computers should be able to sense human presence and emulate face-to-face communication. These agents will be endowed with enough intelligence to be knowledgeable about the user's tastes, interests, acquaintances, etc. (Negroponte, 1995).

The goal of breaking the desktop computer paradigm is common to the work on mobile, ubiquitous, and wearable computing. These fields claim that the services provided by computers should be as mobile as their users and should take advantage of the constantly changing context in which they are used. This can lead to active environments in which these computers interact with each other and with the user in an intelligent and non-invasive way. The philosophy of ubiquitous computing is the opposite of virtual reality: VR tries to introduce the person inside the computer, whereas ubiquity talks about computers integrated into the lives of people under the slogan "the world is not a desktop" (Weiser, 1994).

Wearable computing provides us with computers integrated into and adapted to the user's personal space. This personal space could comprise the user's clothes, body surface, and even the interior of the organism. The wearable computer extends the reach of the human senses, improves memory capabilities, and increases intelligence (Ross, 2000). It should also act as a gateway between human beings and the outside world, filtering out what is not relevant and serving as a protective wall against cyber attacks (Mann, 1998). Wearable computers represent a real challenge for today's HCI designers and engineers because interfaces as we know them invade the personal space of the user.

As in other technologies, one of the most important drivers of change is the market. For example, graphical user interfaces (GUIs) opened the personal computer market to users without IT knowledge. The need to seek new markets leads to deepening the concept of usability. It is noticeable that, even now that we have the technical capability to realize these revolutionary concepts of interaction, companies that design hardware and software tend to be very cautious. They are still using standard interfaces for fear that drastic changes may cause rejection by the user. Progress is purely cosmetic: colors, shapes, designs, but nothing fundamental. Designers, meanwhile, blame users. According to them, users are very conservative and cling to the systems they know, avoiding adventures with other systems even if they promise better features (Jiang, 2000).

The great challenge is to build general-purpose portable computers that satisfy the five attributes described by Steve Mann (Mann, 1998). These devices should be:
  • Personal: Human and computer are inextricably intertwined.
  • Prosthetic: You can adapt to it until it feels like a true extension of your body.
  • Corporeal: It does not make the user look strange to others.
  • Private: Others can't observe or control it unless you let them.
  • Constant: Always on, always running, always ready.
Formal HCI principles for designing software and hardware interfaces are the cornerstone for accomplishing these objectives. However, it still represents a big challenge given the current state of technology: computing power, energy consumption, physical limitations, hardware volume, etc. Experts predict drastic changes for human-computer interaction. The ability to track eyes, recognize speech, and sense touch are important ways in which future computers can be improved to better respond to the needs of the user (Rozanski, 2010). These changes have much to do with the disappearance of the computer as we know it. However, predictions about new input/output devices and new styles of interaction are based on existing products, some of which are only at the prototype stage. This calls into question the premise of "total change". Surely the changes that will occur in the next five to twenty years, if they are to be truly revolutionary, are impossible to predict based on today's standards.

References

Rozanski, Evelyn. (2010) “Lecture on Human Computer Interaction”. Golisano College of Computing and Information Sciences. Rochester Institute of Technology. Rochester, NY.

Moulton, Dave (1998) “Optimal Character Arrangements for Ambiguous Keyboards”, IEEE Transactions on Rehabilitation Engineering, vol. 6, no. 4, pp. 415-23.

Starner, T. (2002) “Wearable Computers: No Longer Science Fiction”, Pervasive computing. 

Negroponte, Nicholas. (1995). “Being Digital”. New York, NY: Random House.

Preece, Jenny (1994). “Human Computer Interaction”. New York, NY: Wesley.

Moggridge, Bill (2006). “Designing Interactions”. Cambridge, MA: MIT Press.

Mann, Steve (1998). “WEARABLE COMPUTING as means for PERSONAL EMPOWERMENT”, Keynote Address for The First International Conference on Wearable Computing, ICWC-98, May 12-13, Fairfax, VA.

Mann, Steve (1998). “Humanistic Intelligence: `WearComp' as a new framework and application for intelligent signal processing”. Proceedings of the IEEE, Vol. 86, No. 11. Ontario, Canada.

Jiang, James (2000) “User resistance and strategies for promoting acceptance across system types” Information & Management, Volume 37, Issue 1, Pages 25-36. Amsterdam, NL.

Weiser, Mark (1994) “The World is not a Desktop”. Interactions, January 1994, pp. 7-8.

Ross, A. (2000) “Wearable Interfaces for Orientation and Wayfinding”, ASSETS’00, November 13-15, Arlington, VA.

Trends in Distributed Database Systems

In recent years, the availability of databases and computer networks has promoted the development of a new field known as distributed databases. A distributed database is an integrated database that is built over a computer network instead of a single computer. Distributed databases offer several advantages to database designers and users, among the most important being transparency in accessing and locating information. However, the design and management of distributed databases face major challenges, including problems not found in centralized databases: for example, data fragmentation and allocation schemes, the management of distributed sites, and query mechanisms for concurrency control and reliability in distributed databases.

There are two forces driving the evolution of database systems. On the one hand, users, as part of more complex organizations, have demanded a number of capabilities that have been incorporated into database systems; an example of this is the need to integrate information from various sources. On the other hand, technology has made it possible for some facilities initially only dreamed of to come true; for example, the online transaction processing that the current banking system relies on would not have been possible without the development of communication equipment. Distributed computing systems are clear examples where organizational pressures, combined with the availability of new technologies, enable the realization of such applications.

In its simplest definition, distributed database systems pursue the integration of diverse and heterogeneous database systems. Their main goal is to provide the user with a global view of the available information. This integration process does not involve the centralization of information; rather, with the help of available computer networking technology, the information is kept distributed, and the distributed database systems allow access to it as if it were located in one place. The distribution of information allows, among other things, quick access to information, copies of information for faster access, and backups in case of failure.

Today’s enterprises must support hundreds or even thousands of applications to meet growing business demands, but this growth is dramatically driving up the cost of running and managing the databases under those applications. The stress this puts on the IT budget makes it harder to provide databases to support new requirements such as Web 2.0 applications or other emerging collaboration solutions or even to support other uses such as increased application testing (Yuhanna, 2008, p. 1).

Distributed Database As a Service

A new emerging option called database as a service (DaaS) hosts databases in the cloud and is a good alternative for some new applications. According to a Forrester Research study, well-known companies such as Amazon, Google, IBM, Microsoft, and Oracle are all targeting the DaaS market. Although most of today's DaaS solutions are very simple, in the next two to three years more sophisticated offerings will evolve to support larger and more complex applications (Yuhanna, 2008, p. 2).

Data outsourcing or database as a service has emerged as a new paradigm for distributed data management in which a third party service provider hosts a database and provides the associated software and hardware support.

The Database Service Provider 

This new approach to distributed database technologies allows for the emergence of a new entity named “The Database Service Provider”, whose mission is to provide seamless mechanisms for organizations to create, store, and access their databases. Moreover, the entire responsibility for database management, i.e., database backup, administration, restoration, database reorganization to reclaim space or to restore a preferable arrangement of data, and migration from one database version to the next without impacting availability, falls on such an organization.

Users wishing to access data will now access it using the hardware and software at the service provider instead of their own organization’s computing infrastructure. The application would not be impacted by outages due to software, hardware, and networking changes or failures at the database service provider’s site. This would alleviate the problem of purchasing, installing, maintaining, and updating the software and administering the system. Instead, the organization will only use the ready system maintained by the service provider for its database needs (Hakan, 2005, p. 5).

The Database Service Provider provides data management for its customers and thus obviates the need for the customer to purchase expensive hardware and software, deal with software upgrades, and hire professionals for administrative and maintenance tasks. However, as wonderful as it sounds, these new capabilities in distributed systems and data management technologies lead to the introduction of new challenges related to the distributed database model, among the most important:

1) Additional overhead of remote access to data,
2) Data privacy and security concerns, and
3) User interface design for such a service. 

Security as a Main Concern

The distributed database has all of the security concerns of a single-site database plus several additional problem areas. Some security threats involve data tampering, eavesdropping and data theft, falsifying user identity, and administering too many passwords, among others. Security can be provided for distributed databases through access control, user authentication, location transparency, and view transparency (Zubi, 2010, p. 3).

With critical and sensitive amounts of data being transferred across the network, it is imperative that some form of security be implemented to ensure the integrity and confidentiality of the system. General database security must satisfy the following requirements: physical integrity, which is protection from data loss; logical integrity, which is protection of the logical structure of the database; elemental integrity, which is ensuring accurate data; easy availability; access control to some degree, depending on the sensitivity of the data; and user authentication to ensure that a user is who they say they are. The goal of these requirements is to guarantee that the data stored in the distributed database system is protected from unauthorized modification and inaccurate updates (Coy, 2010, p. 269).

Market Concerns: More Security, Optimization, and Integrity

According to the National Science Foundation project on DaaS, conducted by Dr. Sharad Mehrotra, the following topics are considered high priority on the subject:

1) The integration of data encryption with database systems to protect data against outside malicious attacks and to limit the liability of the service provider. However, encryption techniques have significant performance implications for query processing in databases.

2) The development of mathematical and statistical measures of Data Privacy for various privacy preserving schemes. 

3) Development of techniques to protect the privacy of user data from the database service providers themselves. If the service providers themselves are not trusted, protecting the privacy of users' data is a much more challenging issue.

A service provider would need to implement sufficient security measures to guarantee data privacy. One key issue is how much privacy is enough. Any data privacy solution will have to utilize encryption which, as usual, comes with a certain cost in terms of database performance and additional hardware requirements. A fundamental question is whether encryption is too costly, thus making the database service provider model infeasible (Mehrotra, 2006, p. 11).

Another approach regarding security strength is the optimization of the query process, which must be able to perform efficiently over encrypted databases. New techniques change the way we process queries over encrypted databases; thus, optimization of these reformulated queries has to be carefully studied. The optimization process should ensure that the users of the system, the clients, can take full advantage of the capabilities promised by the DaaS model (Hayes, 2008, p. 10).
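
As a rough illustration of why query processing changes, the following Python sketch (not taken from the cited works; the schema, key handling, and helper names are assumptions, and it relies on the third-party cryptography package) shows one common technique: a deterministic HMAC token is stored next to each ciphertext so the provider can answer equality predicates without ever seeing plaintext, while decryption happens only at the client. Range queries, joins, and aggregates are exactly where such schemes become expensive, which is the optimization problem discussed above.

    # Sketch: equality search over encrypted data hosted at a DaaS provider.
    # Requires the third-party 'cryptography' package; names are illustrative.
    import hmac, hashlib, sqlite3
    from cryptography.fernet import Fernet

    index_key = b"client-side secret for the deterministic index"  # never leaves the client
    fernet = Fernet(Fernet.generate_key())                         # client-side data key

    def index_token(value):
        # Deterministic token: lets the provider match equality predicates
        # without ever seeing the plaintext value.
        return hmac.new(index_key, value.encode(), hashlib.sha256).hexdigest()

    # "Provider side": stores only tokens and ciphertext.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE patients (name_token TEXT, record BLOB)")

    def insert(name, record):
        db.execute("INSERT INTO patients VALUES (?, ?)",
                   (index_token(name), fernet.encrypt(record.encode())))

    def find_by_name(name):
        rows = db.execute("SELECT record FROM patients WHERE name_token = ?",
                          (index_token(name),)).fetchall()
        return [fernet.decrypt(r[0]).decode() for r in rows]  # decrypted client-side

    insert("alice", "blood type O+")
    insert("bob", "blood type AB-")
    print(find_by_name("alice"))  # ['blood type O+']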

Another important element that demands attention is database integrity. Once data encryption is employed as a solution to the data privacy problem, it may generate integrity issues in this context. As a result of both malicious and non-malicious causes, the integrity of the data may be compromised. When this happens, the client does not have any mechanism to verify the integrity of the original data. Therefore, new techniques have to be developed to provide clients with mechanisms to check the integrity of their data hosted at the service provider side (Coy, 2010, p. 265).
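
A minimal sketch of one client-side integrity mechanism of the kind the text calls for (illustrative only; the key handling and record layout are assumptions): the client keeps a secret key, stores a message authentication code next to each outsourced row, and verifies it on retrieval, so tampering at the provider side becomes detectable.

    # Sketch of client-side integrity tags for outsourced rows (illustrative only).
    import hmac, hashlib

    integrity_key = b"client-only integrity key"

    def tag(row_id, payload):
        # Binding the row id into the MAC prevents the provider from swapping rows.
        return hmac.new(integrity_key, row_id.encode() + b"|" + payload,
                        hashlib.sha256).hexdigest()

    def store(remote, row_id, payload):
        remote[row_id] = (payload, tag(row_id, payload))   # provider keeps both

    def fetch_verified(remote, row_id):
        payload, stored_tag = remote[row_id]
        if not hmac.compare_digest(stored_tag, tag(row_id, payload)):
            raise ValueError("integrity check failed for row " + row_id)
        return payload

    provider = {}                                  # stands in for the hosted database
    store(provider, "acct-17", b"balance=1200")
    print(fetch_verified(provider, "acct-17"))     # passes
    provider["acct-17"] = (b"balance=9999", provider["acct-17"][1])
    # fetch_verified(provider, "acct-17")          # would now raise: tampering detected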

An additional issue to address in the context of encrypted databases is key management. All encryption techniques rely on secure and efficient key management architectures. The DaaS model puts additional complexity on key management architectures; therefore, new techniques are demanded for the generation, registration, storage, and update of encryption keys (Reavis, 2010, p. 28).
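
The sketch below illustrates one way key rotation can be handled on the client side without taking the hosted data offline, using the MultiFernet helper from the cryptography package; the key registration and storage details that a full DaaS key management architecture would need are omitted here.

    # Sketch of client-side key rotation for ciphertext already stored in the cloud.
    # Requires the 'cryptography' package; key registration/storage are omitted.
    from cryptography.fernet import Fernet, MultiFernet

    old_f, new_f = Fernet(Fernet.generate_key()), Fernet(Fernet.generate_key())
    ciphertext = old_f.encrypt(b"customer record")   # data already at the provider

    # MultiFernet decrypts with any listed key and re-encrypts with the first one,
    # so rows can be rotated incrementally without taking the database offline.
    rotator = MultiFernet([new_f, old_f])
    ciphertext = rotator.rotate(ciphertext)

    print(new_f.decrypt(ciphertext))                 # b'customer record'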

Other emerging technologies that have evolved in some way from distributed databases are collaborative computing systems, distributed object management systems and the web. Much of the work on securing distributed databases can be applied to securing collaborative computing systems (Zubi, 2009, p. 10). 

A Market Survey for DaaS Adoption

In 2009, the Information Systems Audit and Control Association (ISACA) surveyed more than 1,500 professionals across 50 countries in order to measure the relative immaturity of DaaS and cloud computing usage and the uncertainty of the balance between risk and reward. This survey revealed that:
  • 9.4% of respondents plan to use DaaS cloud computing for mission-critical IT services.
  • 8.8 % will only use the cloud for low-risk, non-mission-critical IT services.
  • 35.6% do not plan to use the cloud for any IT services.
  • 28.2% were not aware of any plans for cloud computing.
  • 12.1% would take large risks to maximize business return.
  • 61% reported that they believe the biggest risk to their organizations is failing to protect confidential data.
A similar finding was reported by Art Coviello, Executive Vice President of EMC Corporation. During a keynote at the RSA Conference 2010, he cited a recent survey conducted by CIO Magazine which found that 51% of IT chiefs in the USA were unwilling to adopt DaaS or cloud computing because of security issues.

The industry needs to deliver solutions that ensure levels of protection for databases in the cloud that would surpass what physical environments are providing today. Security needs to be embedded in the virtual layer and practitioners need to shift from safeguarding the enterprise architecture to adopting a posture of information-centric protection (Coviello, 2010, p. 1).

Another survey, conducted by IEEE/CSA in 2010, revealed that IT professionals are concerned and recognize the importance and urgency of DaaS security standards:
  • 44% responded that they are already involved in cloud computing projects, but in projects that do not involve corporate data stored in the cloud.
  • 93% considered the need for cloud computing security standards important.
  • 82% said the need is urgent. Data privacy, security, and encryption comprise the most urgent area of need for standards development.

Distributed databases in their DaaS flavor are still a young technology. They run in the cloud, and when consuming cloud services it is important to recognize the dangers and potential risks facing us, as with any new or existing IT investment. Security concerns, questions about the maturity of suppliers in an industry in its infancy, reliability, and regulatory issues are all topics of concern for professionals making decisions regarding the adoption of this new technology.

It is clear from the findings of these surveys that enterprises across sectors are eager to adopt database services over cloud computing, but security standards are needed both to accelerate cloud adoption on a wide scale and to respond to regulatory drivers (Smith, 2010, p. 18).

The absence of a security compliance environment is having an impact on the adoption of database services over cloud computing. Distributed database systems are a reality and, moreover, are here to stay. Many organizations are now deploying distributed database systems. Therefore, we have no choice but to ensure that these systems operate in a secure environment.

There is still a long road to travel, but efforts are being made. The overall issue, aside from the database itself, is to ensure that the databases, operating systems, applications, network, web technologies, and clients are not only secure, but are also securely integrated (Zubi, 2010, p. 9).

Works Cited

Coviello, A. (2010). Securing the Path to Virtualization and the Private Cloud from the Desktop to the Datacenter. In Proceedings of the RSA 2010 Security Decoded Conference, International Conference on Computer Security. EMC-RSA, Inc. Boston, MA, 34-35.

Coy, S. (2010). Implications of the Choice of Distributed Database Systems. In Proceedings of the IEEE Symposium on Research in Security and Privacy, pp. 260-272.

Hakan, H. (2005). Providing Database as a Service. ACM Transactions on Database Systems, pp. 25-29.

Hayes, B. (2008). The LDV Secure Relational DBMS Model. Communications of the ACM, pp. 9-11.

Mehrotra, S. (2005). Encryption in Relational Database Management Systems. In Proceedings of the Fourteenth Annual IFIP Working Conference on Database Security, pp. 105-109.

Reavis, J. (2010). Regulatory Requirements Demand Security Standards Compliance. In Survey by IEEE and Cloud Security Alliance Details Importance and Urgency of Cloud Computing Security Standards. IEEE Press Release. IEEE Computer Society Press, pp. 29-30.

Smith, B. (2010). Building Confidence in the Cloud: A Proposal for Industry and Government Action for Europe to Reap the Benefits of Cloud Computing. International Conference on EU Digital Market. Microsoft Press. Seattle, WA, 11-20.

Yuhanna, N. (2008). Database-As-A-Service Explodes On The Scene. Forrester Research.

Zubi, S. (2010). On Distributed Database Security Aspects. ICMCS '09 International Conference on Multimedia Computing and Systems, 2009.

Cognitive Walkthrough for the Microsoft Xbox 360 ® Gamepad Controller

The gamepad is a device with a direction controller situated on its left side and action buttons on the right. Even though, over the years, it has changed in shape and size and gained a few extra buttons and options, the device retains its basic form and button placement. These devices are the primary means of input for video game consoles. Gamepad controllers enhance the perception of elements residing in a non-spatial part of the gaming design space and make up for the broken perceptual link that occurs when a player is linked to a virtual avatar through a display and an audio system.

Among contemporary objects, gamepads are peculiar given their existence both as physical artifacts and as interfaces to control characters in digital environments. Unlike joysticks, they correspond to a type of game controller held in the hand, where fingers interact with buttons, sliders, and tiny sticks. Therefore, observing this unique device makes it possible to highlight critical implications about human-computer interaction and innovation in the field of new media: the complex relationship between controllers and video game design, the evolution of game interfaces, as well as the evolution of technical objects in general.

Figure 1
The most current controller generation for the famous Xbox 360 ® is not the exception. This sophisticated controller represents an interesting evolution from previous generations of gamepads. A lot of functions have been added: instead of two fire buttons, the controller now has four; it retains its digital directional pad but now has two extra analog sticks, four extra buttons on the front side (Left 1 and 2 and Right 1 and 2), a guide button, a charge port, a ring of light, an audio port, a battery bay, back and start buttons, a vibration function, and even wireless technology, as can be seen in Figure 1. It is important to note that Microsoft's target audience for this device is male, hard-core gamers aged 17 and older.

Stick and directional pad interactions correspond to spatial movement and/or directionality; buttons correspond to actions the user can take. They tend to be fairly simple, limiting the user's actions to a few well-understood options (though complexity can increase with the number of buttons and context-sensitive buttons). They are efficient, as the user can rapidly and repetitively enter game commands with the same muscle movements. Such interfaces have proven to be familiar and comfortable after having been the standard for so long.

Figure 2
The controller provides a good conceptual model of its use. The quality of the materials used in the controller, from the enclosure to the different buttons and sticks, is first rate and provides a solid feel. The curvature given to the top surface plays an important role in the way the face buttons feel and in how everything on the controller's surface seems handy and easy to reach, and at the same time it creates a constraint on how the controller should be manipulated. The user can easily interpret that the controller must be grabbed with both hands. Once the controller is grabbed, the natural position of the thumbs corresponds almost exactly with the actual location of the sticks and control buttons. The ergonomics of the trigger buttons perfectly accommodate the index and middle fingers, giving the user the sense that the device is a natural extension of the body, as can be seen in Figure 2. The controller is very comfortable and light, and the layout is quite well done. It can be used while standing, it is easy to learn, and the presence of two analog sticks and several buttons offers a great variety of input possibilities. As of now, though, the majority of virtual environments using the Xbox 360 ® gamepad as an interaction device allow users to navigate freely, look around, and offer a very good degree of interactivity.


In regard to the possible actions that the Xbox 360 ® controller provides and suggests to the user, we can find a quite good set of affordances: spatial movement, fast triggering, double triggering, multi-action points, up to 6 simultaneous command combinations, among many others.

In terms of usability, regular gamers tend to adapt naturally to this new version of the controller. When the controls are too difficult to learn or use, or if they detract from the gaming experience, users will decide not to play the game (Smith, 2006). This effect is sometimes seen in new players or casual users when they have to fight the controls more than the game mechanics and give up in frustration. However, this symptom is not exclusive to the Xbox 360 ® controller; it applies to all video game controllers or interfaces known so far.

A factor easily noticed in a user's first interaction with modern video games is the loss of the sense of direction that occurs when a player navigates a virtual space with his/her real-world navigation skills while being perceptually linked to that world through a display, an audio system, and a haptic controller interface. The Xbox 360 ® overcomes this phenomenon by allocating the gamepad controller functions according to standard real-world reference frames. For example, when people play a game, the gamepad faces the ceiling. Hence, it is reasonable and easy to deduce that the “up” and “down” keys of the cross pad allow performing inward and outward movements.

Control components are visible and are laid out in a way that makes it clear to the user what actions are possible and what the appropriate way to perform those actions is: move, press, push down, apply pressure, etc. The Xbox 360 ® controller provides means of easy identification of interaction elements, so the user is able to perform intuitively. On the other hand, the controller layout provides the overall flow needed to accomplish the end goal; therefore, gulfs of execution are easy to map. In terms of gulfs of evaluation, this will be addressed when evaluating the feedback channels provided by the controller. These characteristics go hand in hand with the principle of “knowledge in the world”. This concept refers to information that exists in the world that we do not need to memorize in order to use. This is evident in the fact that game players do not memorize button combinations; they play intuitively without thinking about, watching, or calculating their actions on the controller.

Figure 3
Another interesting characteristic is the presence of signifiers. A signifier, as defined by Norman, is some sort of indicator or signal in the physical or social world that can be interpreted meaningfully (Norman, 2008). These are present in the Xbox controller via four independent quadrants on the ring of light that light up according to the controller number assigned to the player, as shown in Figure 3, allowing the user to easily interpret which button to press, and in which direction, in order to complete the option explained on the screen.


In relation to the feedback mechanisms, the Xbox 360 ® controller allows for a novel way to provide the user with information that is hard to convey in a subtle visual manner. The Xbox 360 ® controller has two vibration motors that enable it to give haptic feedback in stereo by varying the output of the different motors. For example, the controller vibrates when the player fires his or her weapon, when the player is being hit by enemy fire, when the player is near the edge of a building, when falling from a high altitude, etc. According to the ISO 9241-9 standard on ergonomic requirements for non-keyboard input devices, the Xbox 360 ® controller complies with the usability aspects shown in Table 1. On the other hand, regarding targeting tasks that involve on-screen point selection, the layout and shape of the interaction points again allow for an easy mental mapping. Targeting is a common task in video games, and the action can be interpreted in a variety of ways, ranging from shooting on-screen enemies to selecting on-screen menu options.
Usability Aspect: Rating
Force required for actuation: Low
Smoothness during operation: Smooth
Mental effort required for operation: Moderate
Accurate pointing: Moderate
Operation speed: Very fast
Finger fatigue: Low
Wrist fatigue: None
General comfort: Comfortable

Table 1. Xbox 360 ® Controller usability aspects per ISO 9241-9
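
For readers unfamiliar with how the two-motor “stereo” feedback described above is driven in practice, the following sketch uses pygame 2's Joystick.rumble call (low- and high-frequency motor intensities from 0.0 to 1.0 plus a duration in milliseconds); the event names and intensity values are purely illustrative and assume a controller is connected.

    # Sketch of two-motor haptic feedback using pygame 2's Joystick.rumble;
    # assumes a controller is connected and the intensity values are made up.
    import pygame

    pygame.init()
    pygame.joystick.init()
    pad = pygame.joystick.Joystick(0)      # first connected controller

    def weapon_fired():
        # Short, sharp kick: mostly the high-frequency motor.
        pad.rumble(0.1, 0.8, 120)          # (low, high, duration in ms)

    def player_hit(damage):
        # Heavier, longer rumble scaled by damage: mostly the low-frequency motor.
        strength = min(1.0, damage / 100.0)
        pad.rumble(strength, strength * 0.3, 400)

    def near_ledge():
        # Subtle continuous warning cue.
        pad.rumble(0.15, 0.0, 250)
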
The traditional controller scheme locates the analogue sticks right below the thumbs; this can be interpreted as a forcing function suggesting that one stick be used for navigation and the second analogue stick for targeting. Normal input generally involves moving/pressing a stick or pushing buttons in a certain sequence, or at a certain time, to perform game actions. In regard to Norman's Seven Stages of Action, the Xbox 360 controller allows for multiple actions and functions. For the purpose of this analysis we will consider a simple interaction with a game's settings menu for effects volume control. That said, we can determine the following outcome:

Activity
  Forming the Goal: The user presses the Menu button. The Volume Settings menu offers an option to lower the effects volume.

Interaction
  Forming the Intention: There is a control labeled “Effects”. The user understands that he can use the gamepad controller to manipulate it in some way to reduce the sound effect volume.
  Specifying the Action: The “Effects” control looks like a little handle on a horizontal track. The user understands that if he could grab the handle and drag it along the track, the volume would change. The user maps the controller sticks to the available screen functions. The controller stick allows for left/down and right/up movement.
  Executing the Action: The user decides to move the stick to the left.

Information
  Perceiving the State of the World: As the user moves the stick to the left, the handle on the screen moves towards the left. When the user releases the stick, the game plays a sample sound effect.
  Interpreting the State of the World: Now the handle is closer to the icon of the smaller speaker, and the sample sound seemed quieter than the one heard during the game, before moving the handle with the controller stick.
  Evaluating the Outcome: The user wanted to turn down the sound effects volume. He continues playing and notices that the volume of the sound effects is lower.

Table 2. Norman's Seven Stages of Action Analysis

A typical user would move through these stages subconsciously in a couple of seconds, but if he is not able to move smoothly through each and every one of them, it can lead to confusion or frustration. The largest hurdle to involvement is the user interface, or how a player interacts with the game. Analyzing usability and adhering to accessibility design principles make it both possible and practical to develop fun and engaging game user interfaces that a broader range of the population can play. The success of a normal execution depends on appropriate coordination between the controller and the video interface, which, as noted, is outside the scope of this analysis.
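
Purely as an illustration of the execution and evaluation stages in Table 2, the sketch below maps stick deflection to the volume handle and plays the sample sound when the stick is released; the stick polling and audio callbacks are hypothetical stand-ins, since the real coordination with the video interface is, as noted, out of scope.

    # Illustrative loop for the execution/evaluation stages of Table 2.
    # read_stick_x() -> float in [-1, 1]; play_sample and draw_slider are stand-ins.
    def effects_volume_loop(read_stick_x, play_sample, draw_slider, volume=0.8):
        stick_was_deflected = False
        while True:
            x = read_stick_x()                        # Executing the Action
            if abs(x) > 0.2:                          # dead zone
                volume = max(0.0, min(1.0, volume + 0.02 * x))
                draw_slider(volume)                   # Perceiving the State of the World
                stick_was_deflected = True
            elif stick_was_deflected:
                play_sample(volume)                   # feedback on release
                return volume                         # Interpreting / Evaluating the Outcome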

Finally, the Xbox 360 ® controller implements a mixture of interlocking and lockout mechanisms for error control and management. The user is unable to perform forbidden operations such as playing the game without calibrating the controller, navigating away from the screen, etc. Again, the feedback channels via the vibration function and the visual cues on the light ring are used to provide memory aids and guidance in case of wrong usage. Normally these error management functions are reinforced by on-screen visual cues.

In conclusion, the Xbox 360 ® controller is, overall, a very well designed artifact. Its adoption has been considerable, with more than 40 million users and over 70% of reviewers giving it a 5-point satisfaction rating (Amazon Reviews, 2010). It is likely that many of the upcoming innovations in gaming will come from new developments in user interface technology. Direct manipulation of user interfaces, context sensitivity, and cross-platform titles will all result in new ways to play. Combining these technologies can lead to a new generation of games and experiences that are fun and exciting for a broad user base. If thoughtfully conceived, these technologies can lead to added usability and accessibility for systems as well.

Recommendations
The proximity between the triggers and the respective bumper buttons is an issue (see Figure 1). These elements are close together and aligned at the same depth, so the two pairs tend to be confused with each other, resulting in wrong button presses. I would suggest giving these buttons distinctive shapes.


Analogue stick targeting is not a good method for performing point-select tasks. Despite this, the method is used in many games that use traditional controllers (Castellucci, 2008). It would be a good idea for future generations of consoles to introduce new controller schemes that make targeting and point selection easier (e.g., the Wii Remote).

Sometimes it is unclear how long to press a specific button. Indicating the estimated pressing time via a visual cue would help to fix this issue.

References

Norman, D. (1990). The Design of Everyday Things. New York, NY. Currency and Doubleday.
Eysenck, M. (2007). Fundamentals of Cognition. New York, NY. Psychology Press.
Norman, D. (2008) . Signifiers, Not Affordances. Interactions Archive. XV. Nov., pp. 42-44.

Smith, J. David. (2006). Use of eye movements for video game control. ACE '06: Proceedings of the 2006 ACM SIGCHI international conference on Advances in computer entertainment technology, 20.
Hill, W., Hollan, J. D., Wroblewski. (1992). Edit Wear and Read Wear: Text and Hypertext. Proceedings of the 1992 ACM Conference on Human Factors in Computing Systems (CHI '92).
Castellucci, S. J. and MacKenzie. (2008). Text entry using three degrees of motion, Extended Abstracts of the ACM Conference on Human Factors in Computing Systems - CHI 2008., 3549-3554.
Silfverberg, M., MacKenzie. (2001). An isometric joystick as a pointing device for hand-held information terminals, Proceedings of Graphics Interface 2001. Information Processing Society, 2001, 119-126.
Amazon Reviews. Customers Review for Xbox 360. Web. October 7, 2010.[Online] http://www.amazon.com/review/product/B000UQAUWW/ref=dp_top_cm_cr_acr_txt?%5Fencoding=UTF8&showViewpoints=1



Tuesday, February 11, 2014

SCADA Live Forensics: Real-time data acquisition process to detect, prevent, or evaluate critical situations

SCADA (Supervisory Control and Data Acquisition) systems were originally created to be deployed in non-networked environments. Therefore, they lack adequate security against Internet-based threats and support for cyber-related forensics. In recent years, SCADA systems have undergone a series of changes that might increase the risks to which they are exposed: their increased connectivity may permit remote control over the Internet, and the incorporation of general-purpose tools brings with it the already known vulnerabilities of those tools. Any cyber-attack against SCADA systems demands forensic investigation to understand the cause and effects of the intrusion or disruption of such systems. However, a SCADA system has a critical requirement of being continuously operational, and therefore a forensic investigator cannot turn off the SCADA system for data acquisition and analysis. This paper leads to the creation of a high-level software application capable of detecting critical situations such as abnormal changes in sensor readings, illegal penetrations, failures, physical memory content, and abnormal traffic over the communication channel. One of the main challenges is to develop a tool that has minimal impact on SCADA resources during the data acquisition process.

The security of SCADA systems is especially relevant in the field of critical infrastructure. A failure of critical infrastructure could have a direct impact on society, to the extent of affecting entire nations and their environment.

Any government network infrastructure or industrial based SCADA (Supervisory Control and Data Acquisition) or DCS (Distributed Control Systems), designed to automate, monitor and control critical physical processes, including manufacturing and testing, electric transmission, fuel and water transport, is subject to potential attacks. 

Supervisory Control and Data Acquisition comprises all application solutions that collect measurements and operational data from locally and remotely controlled equipment. The data is processed to determine whether the values are within tolerance levels and, if necessary, corrective action is taken to maintain stability and control. Its basic architecture comprises a centralized server or server farm, RTUs (Remote Terminal Units) or PLCs (Programmable Logic Controllers) to manage field devices, and consoles from which operators monitor and control equipment and machinery.
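
A toy sketch of that supervisory loop follows (illustrative only; the tag names, tolerance bands, and the read_tag/write_tag/alarm callbacks are assumptions standing in for the RTU/PLC communication layer).

    # Illustrative supervisory check: poll field values, compare against
    # tolerance bands, and issue a crude corrective action if needed.
    TOLERANCES = {"boiler_temp_c": (60.0, 95.0), "line_pressure_bar": (1.0, 6.5)}

    def supervise(read_tag, write_tag, alarm):
        for tag, (low, high) in TOLERANCES.items():
            value = read_tag(tag)
            if value < low or value > high:
                alarm("%s=%s outside [%s, %s]" % (tag, value, low, high))
                write_tag(tag + "_setpoint", (low + high) / 2.0)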

Because SCADA systems were originally created to be deployed in non-networked environments, they lack adequate security against Internet-based threats and support for cyber-related forensics.

Most industrial plants now employ networked process historian servers for storing process data and other possible business and process interfaces. The adoption of Ethernet and Transmission Control Protocol/Internet Protocol (TCP/IP) for process control networks, and of wireless technologies such as IEEE 802.x and Bluetooth, has further reduced the isolation of SCADA networks (Zhu, Anthony & Sastry, 2011).

In recent years, SCADA systems have undergone a series of changes that might increase the risks to which they are exposed. Among these risks, increased connectivity may permit remote control over the Internet, and the incorporation of general-purpose tools brings with it the already known vulnerabilities of those tools.

SCADA systems, in particular, perform vital functions in national critical infrastructures, such as electric power distribution, oil and natural gas distribution, water and waste-water treatment, and transportation systems. They are also at the core of health-care devices, weapons systems, and transportation management. The disruption of these control systems could have a significant impact on public health and safety and lead to large economic losses (Cardenas, Amin, Huang, Lin & Sastry, 2011).

As a consequence, there is an increasing interest in the security/forensic research community on SCADA systems. This is mostly due to the heightened focus of governments worldwide on protecting their critical infrastructures, including SCADA systems (Ahmed, Obermeier & Naedele, David, Chaugule & Campbell, 2012). 

Securing SCADA systems is a critical aspect of Smartgrid security. As sophisticated attacks continue to target industrial systems, the focus should be on planning and developing new security techniques that will adapt to the SCADA environment and protocols (Rodrigues, Best & Pendse, 2011).

Immediate needs identified in this area include the collection of evidence in the absence of persistent memory, hardware-based capture devices for control systems network audit trails, honeypots for control systems as part of the investigatory process, radio frequency forensics and intrusion detection systems for SCADA control systems (Nance, Hay & Bishop, 2009). However, post-mortem analysis tools require the investigator to shut down the system to inspect the contents of disks and identify artifacts of interest. This process breaks network connections and unmounts encrypted disks causing significant loss of potential evidence and possible disruption of critical systems (Chan & Venkataraman, 2010). 

Computer forensics relies on log events when searching for evidence of a security incident. However, the massive amount of generated events, along with a lack of standardized logs, complicates the analyst's task (Herrerias & Gomez, 2007).

Digital forensics investigators are experiencing an increase in both the number and the complexity of cases that require their attention. Most current digital forensic tools are designed to run on a single workstation, with the investigator issuing queries against copies of the acquired data evidence. With current-generation tools, the single-workstation model works reasonably well and allows tolerable case turnaround times for small forensic targets (for example, < 40 GB). For much larger targets, these tools are too slow to provide acceptable turnaround times (Richard & Roussev, 2006).

The challenge, however, is to mitigate the vulnerabilities that occur once a networked device becomes accessible from the internet. Attacks ranging from DDoS to backdoor intrusion are possible on industrial networks and power and SCADA systems. Although network firewalls can stop a significant amount of malicious traffic, there are several techniques hackers can use to bypass these security devices. The complexity of the infrastructure can make it difficult to detect malicious behavior (Rodrigues et al., 2011).

Any cyber-attack against SCADA systems demands forensic investigation to understand the cause and effects of the intrusion or disruption on such systems. However, a SCADA system has a critical requirement of being continuously operational and therefore a forensic investigator cannot turn off the SCADA system for data acquisition and analysis. Current forensic tools are limited by their inability to preserve the hardware and software state of a system during investigation. 

Process control systems (SCADA systems) have generated much discussion as an area that the security community recognizes as a security threat, but that industry does not yet perceive as much of a threat. As a result, this field lags behind most technical fields in the area of security (Nance, Hay & Bishop, 2009).

The objective is to study and research security vulnerabilities related to networked Supervisory Control and Data Acquisition (SCADA) systems, in order to develop a forensic computing model that supports incident response and the digital evidence collection process. Forensic investigation can play a vital role in a protection strategy for SCADA systems and may assist in the prosecution of attackers, but also in a deep analysis of the underlying SCADA IT system, for example in the case of non-malicious events such as malfunctioning hard disks or other hardware. However, the critical nature of SCADA systems and the 24/7 availability requirement entail forensic investigators spending as little time on a live SCADA system as possible, necessarily performing live data acquisition and then subsequent offline analysis of the acquired data (Ahmed et al., 2010).
Relevance and Significance 

In recent years there has been an increasing interest in the security of process control and SCADA systems. Furthermore, recent computer attacks, such as the Stuxnet worm, have shown there are parties with the motivation and resources to effectively attack control systems (Cardenas et al., 2011).

SCADA systems are deeply ingrained in the fabric of critical infrastructure sectors. These computerized real-time process control systems, over geographically dispersed continuous distribution operations, are increasingly subject to serious damage and disruption by cyber means due to their standardization and connectivity to other networks (Zhu & Anthony, 2011). 

In recent times it has been noticed that hackers implement newer techniques to launch attacks that can evade traditional security devices. It is therefore important to secure the SCADA systems from process related threats (Rodrigues et al., 2011). 

Compromising such a system with intrusion attacks can lead not only to high financial losses but, more importantly, to the endangerment of public safety. The danger is even higher considering that critical infrastructures are not immune to these threats and that they may be potentially more vulnerable than common information technology systems. Hence, intrusion protection for critical infrastructures is an obvious need (Linda, Vollmer & Manic, 2009).

Reliability of many SCADA systems is not only dependent on safety, but also on security. Recent attacks against SCADA systems, by sophisticated malware, demands forensic investigation to understand the cause and effects of the intrusion on such systems so that their cyber defense can be improved. 

A SCADA system has a critical requirement of being continuously operational and therefore a forensic investigator cannot turn off the SCADA system for data acquisition and analysis. In this case, live forensics is a viable solution for digital investigation in SCADA systems (Ahmed et al., 2012). 

In real life, logs are rarely processed by stakeholders due to 1) the large number of entries generated daily by systems and 2) a general lack of security skills and resources (time) (Hadziosmanovic et al., 2011). However, the use of the classical post-mortem analysis approach is becoming problematic, especially for large-scale investigations involving a network of computers. In addition, the amount of time available for processing this data is often limited (Su & Wang, 2011).

A substantial body of research exists in the area of forensic models for live acquisition on SCADA systems. Related research work is discussed in this section.

There is a growing need for systems that allow not only the detection of complex attacks, but also an after-the-fact understanding of what happened (Tang & Daniels, 2012).

Several research efforts address threats in SCADA systems. For the identification of threats, authors typically use questionnaires and interviews. To detect anomalous behavior, authors use approaches based on inspecting network traffic, validating protocol specifications, and analyzing data readings. Process-related attacks typically cannot be detected by observing network traffic or protocol specifications in the system. To detect such attacks one needs to analyze data passing through the system and include a semantic understanding of user actions (Hadziosmanovic et al., 2011).

A group of researchers who met at the Colloquium for Information Systems Security Education (CISSE 2008) to brainstorm ideas for a digital forensics research agenda concluded that current SCADA systems are potentially more vulnerable to attack and more likely to need associated digital forensics capabilities. Unfortunately, most process control systems were not built to track their processes, but merely to control them. As a result, significant research and development categories were identified in this area, including, among the most important, mechanisms for the collection of evidence in the absence of persistent memory and hardware-based capture devices for control systems (Nance, 2009). Figure 1 shows the list of topics in need of further development; it can be noticed that live acquisition and control systems are among them.



Chen & Abu-Nimeh (2011) conducted an in-depth study of the Stuxnet malware. According to their report, this was the first malware written exclusively to attack a SCADA platform.

The Stuxnet experience has shown that isolation from the Internet is not an effective defense, and an extremely motivated attacker might have an unexpected combination of inside knowledge, advanced skills, and vast resources. Existing technologies would have difficulty defending against this caliber of attack (Chen & Abu-Nimeh, 2011). Hence the need for new forensic methods that go beyond traditional prevention mechanisms.

Ahmed, Obermeier & Naedele, David, Chaugule & Campbell (2012) propose a forensic mechanism called live forensics as a viable solution for SCADA systems. Live data acquisition involves acquiring both volatile data (such as the contents of physical memory) and non-volatile data (such as data stored on a hard disk). It is different from traditional dead disk acquisition, which involves bringing the system offline before the acquisition, whereby all volatile data is lost.

However, despite the importance of live data acquisition, it is still unclear how contemporary live data acquisition tools should be run on a SCADA system so that they minimize risk to SCADA system services (Ahmed et al., 2011).

Adelstein (2006) established that the possibility of implementing live forensics over SCADA systems relies on the capability of the operating system to provide the list of running processes. Therefore, he recognized the need for tools capable of examining the raw memory of a machine. These tools are analogous to the static tools that open the raw disk device and impose the file system structure on it to extract files, directories, and metadata (Adelstein, 2006).

Sutherland et al. (2008) performed exploratory studies of live forensics within the Windows operating system environment and determined the need for more invasive tools that allow better access to information related to memory, network, and system activity; these tools were assessed to determine their impact on the file system, system registry, memory, and the usage of DLLs.

Hadziosmanovic et al. (2011) proposed a tool-assisted approach to address process-related threats. They presented an experimental study in which SCADA threats that are unlikely to happen, or that do not occur in a systematic manner, are detected and logged for investigation; an example could be an attacker who manages to get valid user credentials and performs disruptive actions against the process. However, this effort was limited to post-mortem log analysis containing data for single-event operations and does not cover anomalous command process sequences. Likewise, it was determined that an attacker might gain unauthenticated remote access to devices and change their data set points. This can cause devices to fail at a very low threshold value or an alarm not to go off when it should. Another possibility is that the attacker, after gaining unauthenticated access, could change the operator display values so that when an alarm actually goes off, the human operator is unaware of it. This could delay the human response to an emergency, which might adversely affect the safety of people in the vicinity of the plant (Zhu & Anthony, 2011).

SCADA systems are increasingly being attached to networks, and typically offer no persistent storage for logging of network activity. The challenge for the digital forensic research community is to develop methods that allow an investigator to determine how these devices interacted with the network during a time period of interest (Nance, Hay & Bishop, 2009).

There is continuing interest in researching generic security architectures and strategies for managing SCADA and process control systems. Documentation from various countries on IT security now begins to include recommendations for security controls for (federal) information systems that include connected process control systems. Little or no work exists in the public domain that takes a big-picture approach to the issue of developing a generic or generalizable approach to SCADA and process control system forensics (Slay & Sitnikova, 2009).

Collection of adequate records or logs of events that happened near incident time is crucial for successful investigation. Logging capabilities of SCADA systems are geared towards discovering and diagnosing process disturbances, not security incidents, and are thus often not adequate for forensic investigation (Fabro & Cornelius, 2008)

Kilpatrick et al. (2008) developed an architecture based on Modbus TCP (Transmission Control Protocol) using two control devices and one HMI (Human Machine Interface) station. This architecture comprised two agents and a central warehouse: the agents were deployed over the SCADA network, captured network traffic containing real-time data, and stored it in the warehouse. Relational database query mechanisms were used in the event of a forensic investigation. However, Ahmed et al. (2011) established that state-of-the-art forensic analysis tools do not support the unique features of diverse SCADA environments, which include SCADA protocols, numerous SCADA applications' proprietary log formats, etc. Thus, plugins or modules for contemporary forensic tools need to be developed to augment forensic analysis in SCADA systems.
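
The agent-plus-warehouse idea can be sketched as follows (this is not Kilpatrick et al.'s implementation; it assumes the scapy package, packet-capture privileges, and an SQLite file standing in for the central warehouse): Modbus/TCP frames on port 502 are archived one row per packet, so an investigator can later ask ordinary SQL questions about a time window of interest.

    # Sketch: archive Modbus/TCP frames (port 502) into an SQLite "warehouse"
    # so an investigator can later query them with plain SQL.
    # Requires scapy and packet-capture privileges.
    import sqlite3
    from scapy.all import sniff, IP, TCP

    warehouse = sqlite3.connect("scada_warehouse.db")
    warehouse.execute("""CREATE TABLE IF NOT EXISTS modbus_frames
                         (ts REAL, src TEXT, dst TEXT, length INTEGER, payload BLOB)""")

    def archive(pkt):
        if IP in pkt and TCP in pkt:
            warehouse.execute("INSERT INTO modbus_frames VALUES (?, ?, ?, ?, ?)",
                              (float(pkt.time), pkt[IP].src, pkt[IP].dst,
                               len(pkt), bytes(pkt[TCP].payload)))
            warehouse.commit()

    sniff(filter="tcp port 502", prn=archive, store=False)

    # Later, during an investigation, e.g.:
    #   SELECT src, COUNT(*) FROM modbus_frames WHERE ts BETWEEN ? AND ? GROUP BY src;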

Nehimbe & Nehibe (2012) proposed a time-series methodology to analyze forensic logs. During their research they concluded that current forensic tools may not necessarily generate the needed results, due to two basic limitations: some of them only have recovery and imaging capabilities, and some intrusion analysis tools are flawed in terms of how they analyze intrusion logs.

Hunt & Slay (2010) proposed an approach named security information and event management (SIEM) with the purpose of providing a tool that allows any networked system to adapt itself based on forensic logging. Their work showed that a SIEM system is an ideal point at which to store log data emanating from security devices and the point at which forensic logging needs to occur. However, although they were able to achieve the implementation of forensically sound log files in some systems, their approach is by no means universal. They concluded that their work still falls short of addressing the core domain of real-time, forensically sound adaptive security.

With the purpose of rebuilding an attack scenario, Herrerias & Gomez (2007) proposed a log correlation model to support the evidence search process in a forensic investigation. In this work, they proposed a system composed of a set of agents that collect, filter, and normalize events coming from diverse log files. Events may come from system logs, application logs, and security logs. Once events are joined together in the same place and under the same format, they are sent to a correlation engine. The engine compares and processes the events in a global fashion in order to follow all actions taken by the attacker (Herrerias & Gomez, 2007).
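
A simplified sketch of that collect/normalize/correlate pipeline is shown below; the log formats, field names, and the naive correlation heuristic are invented for illustration and are not taken from the cited paper.

    # Sketch of collect/normalize/correlate: two toy log formats are normalized
    # into a common event dictionary, then a naive engine flags actors whose
    # events touch several hosts inside a short time window.
    import re
    from collections import defaultdict
    from datetime import datetime

    def normalize_syslog(line):
        m = re.match(r"(\S+ \S+) (\S+) sshd: Failed login from (\S+)", line)
        if m:
            return {"ts": datetime.fromisoformat(m.group(1).replace(" ", "T")),
                    "host": m.group(2), "actor": m.group(3), "event": "auth_failure"}

    def normalize_app_log(line):
        m = re.match(r"(\S+)\|(\S+)\|SETPOINT_CHANGE\|(\S+)", line)
        if m:
            return {"ts": datetime.fromisoformat(m.group(1)), "host": m.group(2),
                    "actor": m.group(3), "event": "setpoint_change"}

    def correlate(events, window_seconds=300):
        by_actor = defaultdict(list)
        for e in sorted(events, key=lambda e: e["ts"]):
            by_actor[e["actor"]].append(e)
        trails = {}
        for actor, evs in by_actor.items():
            span = (evs[-1]["ts"] - evs[0]["ts"]).total_seconds()
            if len({e["host"] for e in evs}) > 1 and span <= window_seconds:
                trails[actor] = evs          # same actor, several hosts, short window
        return trails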

Su & Wang (2011) developed a formula using probability theory and mathematical statistics to quantitatively calculate the degree of memory change on a live system. Their conclusions state that the frequent change of the live memory state is a natural limitation for live forensics. In their experiments they tried to restore the same system state each time; however, the real state had already changed after one or two seconds. Therefore, they were only able to determine an approximation of the system memory in every repeated process.

Further work is required to assess tools over various operating systems. This would be of value to the forensic investigator, but the way memory is handled and analyzed varies greatly between Windows service packs, let alone other operating systems; as a result, the area of memory forensics is deeply complex and requires a significant amount of time and effort from the forensic examiner to begin to comprehend how memory works in modern operating systems (Sutherland, Evans, Tryfonas & Blyth, 2008).

Other research approaches are related to autonomic attack detection and response. Cardenas et al. (2011) showed that by incorporating a physical model of the system they were able to identify the most critical sensors and attacks. They also proposed the use of automatic response mechanisms based on estimates of the state of the system. However, they concluded that this methodology might be problematic, especially when the response to a false alarm is costly (which could well be the case in a SCADA environment). As a result, their model should be considered a temporary measure until a human investigates the alarm.

While there have been a good number of research efforts investigating the suitability of forensic mechanisms for SCADA, this work differs in that it aims to investigate what is required to develop a forensic computing model that supports incident response and the digital evidence collection process without interfering with the "always running" condition of SCADA platforms. The intended approach extends the work of Ahmed et al. (2011), who proposed live forensics as a viable mechanism for SCADA systems. Live data acquisition involves acquiring both volatile data (such as the contents of physical memory) and non-volatile data (such as data stored on a hard disk). It differs from traditional dead-disk acquisition, which involves bringing the system offline before acquisition, so that all volatile data is lost.

From a forensic perspective, a SCADA system can be viewed as a set of layers based on how the various SCADA components are connected to each other and to external networks such as the Internet.



The lowest layer (Layer 0) represents the physical elements designed to interact directly with the industrial hardware or machinery; these devices are connected via a bus network. Layer 1 receives electrical input signals, which are decoded as bit streams over standard network protocols; the result is transferred to the upper layers for analysis and control responses. Layer 3 and above represent the enterprise network, which is also interconnected with the supervisory systems; at this level all traffic containing database content and the applications supporting the business logic of the operation is managed. As stated by Ahmed et al. (2011), live forensic analysis for SCADA systems must focus on Layers 0, 1 and 2.
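
Purely as an illustration of how this layered view could scope an acquisition tool, the following sketch encodes the layers as a simple lookup table; the component examples and layer boundaries are assumptions drawn from the description above, not a standard or vendor definition.

    # Illustrative layer map only; component names and boundaries are assumptions.
    SCADA_LAYERS = {
        0: {"name": "Field devices", "examples": ["sensors", "actuators"], "acquire": True},
        1: {"name": "Control devices", "examples": ["PLCs", "RTUs"], "acquire": True},
        2: {"name": "Supervisory systems", "examples": ["HMI stations", "historians"], "acquire": True},
        3: {"name": "Enterprise network", "examples": ["business databases", "applications"], "acquire": False},
    }

    def layers_in_scope():
        """Layers targeted by live acquisition (0-2, following Ahmed et al., 2011)."""
        return [level for level, layer in SCADA_LAYERS.items() if layer["acquire"]]

    print(layers_in_scope())  # [0, 1, 2]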

The initial approach is to develop a forensic watchdog by means of a finite-state automaton that functions as an agent constantly listening for SCADA events. When a particular event is sensed, the input values are read and compared against a set of predefined rules in order to decide the change of state. Figure 2 represents the proposed automaton model.


This two-state automaton, or agent, constantly monitors the state of the SCADA system, including measures from system variables, sensor tags, network traffic and command executions. These values are checked against a set of behavioral rules. If a reading is detected to be above the normal range, the agent automatically switches to Forensic mode and initiates the logging of forensic information. Ideally, a separate backup system would continuously dump the abnormal readings from the SCADA tags and create a record of the event, appending all the information available about the system state, including, but not limited to: CPU load, sensor names, sensor values, state of the physical memory, state of the virtual memory, state of the networking variables, state of mounted disk and network drives, and the list of active processes in memory including name, executable name, working directory, command line, user name, user and group IDs, threads, connections, file descriptors, etc. The information logged would relate exclusively to the period of the anomaly. Once the system starts reading normal values again, it switches back to Monitor mode and stops logging.
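
A minimal Python sketch of this two-state agent is given below. The rule format (an upper threshold per tag), the tag names and the snapshot_system_state helper are hypothetical placeholders used only to show the Monitor/Forensic transition and the anomaly-bounded logging; a real tool would collect the full system state listed above.

    import time
    from enum import Enum

    class Mode(Enum):
        MONITOR = "monitor"    # normal operation: read values, no forensic logging
        FORENSIC = "forensic"  # anomaly detected: log readings plus system state

    def snapshot_system_state():
        """Hypothetical helper: a real tool would gather CPU load, memory state,
        network variables, process lists, etc., in a lightweight, non-invasive way."""
        return {"ts": time.time()}  # placeholder payload

    class ForensicWatchdog:
        def __init__(self, rules, sink):
            self.rules = rules  # assumed rule format: {tag_name: max_normal_value}
            self.sink = sink    # separate backup store that receives forensic records
            self.mode = Mode.MONITOR

        def check(self, readings):
            """Evaluate one scan of tag readings ({tag_name: value}) and switch
            between Monitor and Forensic mode according to the rules."""
            abnormal = {t: v for t, v in readings.items()
                        if t in self.rules and v > self.rules[t]}
            if abnormal:
                self.mode = Mode.FORENSIC
                # Log only while the anomaly lasts, with a snapshot of system state.
                self.sink.append({"readings": abnormal, "state": snapshot_system_state()})
            else:
                self.mode = Mode.MONITOR

    # Example usage with a single hypothetical tag:
    log = []
    watchdog = ForensicWatchdog(rules={"tank_pressure": 150.0}, sink=log)
    watchdog.check({"tank_pressure": 182.3})  # above range: switches to Forensic mode, logs
    watchdog.check({"tank_pressure": 120.0})  # back to normal: returns to Monitor mode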

Normally a SCADA system reads every sensor or control register on the system. These registers are known as tags, and the logging frequency can range from one read every second to one every 300 milliseconds. A typical SCADA system can have up to 40,000 tags, and a system of that magnitude can generate approximately 400 GB of data for every 24-hour period of operation, estimating an average record size of 120 bytes and one read per tag per second. Live acquisition is therefore only part of the challenge: dealing with vast amounts of data also needs to be considered. These volumes require manipulation by means of database query processors and, moreover, a fast capture and writing process that must (1) keep up with the logging process as new data comes into the system, and (2) do so without imposing additional workload on the monitored SCADA system; in other words, it must be accomplished in a non-invasive manner.
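
The ~400 GB figure can be checked with a quick back-of-the-envelope calculation, assuming one read per tag per second and the 120-byte average record size stated above:

    TAGS = 40_000                          # typical tag count mentioned above
    RECORD_BYTES = 120                     # assumed average record size
    READS_PER_TAG_PER_DAY = 24 * 60 * 60   # one read per tag per second

    daily_bytes = TAGS * RECORD_BYTES * READS_PER_TAG_PER_DAY
    print(f"{daily_bytes / 1e9:.0f} GB per 24-hour period")  # ~415 GB, consistent with ~400 GB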

Another challenge for the intended solution is that SCADA system components may be running on legacy hardware and operating systems. In such cases, the SCADA system provides limited resources for data acquisition and therefore demands lightweight acquisition tools, so that the gathering process does not become a large consumer of resources. Moreover, if a relational database were implemented for better data analysis, the data conversion process would require specialized hardware to reduce processing time and speed up the file conversion and querying processes.

Another aspect of this experiment suitable for further development is that current forensic analysis tools do not provide standard support for the variety of SCADA hardware versions, protocols and log formats. There is therefore an interesting opportunity to extend this experiment with the development of plugins, applications and interface layers that increase the number of SCADA forensic tools, building on the work of Hadziosmanovic et al. (2011), who stated that despite the existence of several vendors, system architectures across SCADA systems are similar and the terminology is interchangeable.

In conclusion, future work leads to the creation of a high-level software application capable of detecting critical situations such as abnormal changes in sensor readings, illegal penetrations, failures, anomalies in physical memory content, and abnormal traffic over the communication channel. One of the main challenges is to develop a tool that has minimal impact on SCADA resources during the data acquisition process. In previous exercises it was observed that acquiring low-level information, such as process or memory information, does not represent an extensive load on the system processing the task. However, the amount of resources demanded is expected to increase as the number of SCADA tags and the logging frequency increase. Therefore, on a real live SCADA system, the acquisition process could end up competing for resources that should be available for the normal operation of the SCADA system.
Barriers and Resources

The literature review reveals multiple barriers and issues that can be anticipated for the development of this work. The next section presents a summary of the known limitations and challenges identified by previous research efforts in this field.

Forensic data gathered from a live system can provide evidence that is not available in a static disk image. Live forensics also operates with different constraints—specifically, the evidence gathered represents a snapshot of a dynamic system that cannot be reproduced at a later date. Standards for acceptance are evolving, and legal precedents are still being established (Adelstein, 2006). 

Given that volatile data in a running system changes continuously, Ahmed et al. (2011) identified two main challenges in live data acquisition on operational SCADA systems: (1) live data acquisition needs to be performed as quickly as possible after an incident, in order to capture any traces of the incident in volatile data before the processes or services on the running system overwrite useful data; and (2) a cryptographic hash of the original evidence on the compromised system and of its acquired copy is normally used to validate all subsequent examination and analysis, but if the compromised system remains live, the state of the data may change between the copying and the hash calculation, rendering hashing ineffective as an integrity check (Ahmed et al., 2011).
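
The second challenge can be illustrated with a small, hedged example: when the source keeps changing, as volatile data does on a live system, the digest computed from the acquired copy no longer matches a digest taken from the source afterwards, so the usual integrity check breaks down.

    import hashlib

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # Hypothetical volatile source whose content keeps changing on the live system.
    volatile_source = bytearray(b"process table at t0")

    acquired_copy = bytes(volatile_source)   # acquisition step
    hash_of_copy = sha256(acquired_copy)

    volatile_source[-1:] = b"1"              # the live system changes before verification
    hash_of_live_source = sha256(bytes(volatile_source))

    # False: the hash can no longer tie the acquired copy back to the live original.
    print(hash_of_copy == hash_of_live_source)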

From a forensic standpoint, modifying the original system memory state is unavoidable; therefore, the live collection process should change as little of it as possible. A complete live forensics case should contain the digital evidence collected by the forensic tools together with an analysis and evaluation of the associated uncertainty. However, it is difficult to measure how much of the volatile memory is modified by a forensic tool; moreover, it is difficult (if not impossible) to calculate the extent of the impact that a running process has on volatile memory. Measuring the extent of the volatile memory changes caused by running a live forensic tool therefore becomes increasingly important (Su & Wang, 2011).
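
One possible, admittedly crude, way to approach such a measurement is sketched below using the psutil package: compare the memory available before and after launching the acquisition tool. The tool name and arguments are hypothetical, and the measurement cannot distinguish the tool's footprint from unrelated background activity, which is precisely the difficulty Su & Wang (2011) point out.

    import subprocess
    import time

    import psutil  # third-party package for cross-platform system statistics

    def memory_delta(cmd, settle_seconds=1.0):
        """Rough, illustrative estimate of how much available memory drops when a
        (hypothetical) acquisition tool is launched."""
        before = psutil.virtual_memory().available
        proc = subprocess.Popen(cmd)
        time.sleep(settle_seconds)   # let the tool load its working set
        after = psutil.virtual_memory().available
        proc.terminate()
        return before - after        # bytes of memory no longer available

    # Example (hypothetical tool name and arguments):
    # print(memory_delta(["./acquire_volatile", "--dump", "/mnt/backup/mem.raw"]))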

Because the architecture of production operating systems prevents applications from accessing kernel memory and storage devices without using the kernel, kernel-based rootkits will always be a threat to live analysis. Future directions in live analysis techniques involve the use of specialized hardware to collect the raw memory and storage data for a dead analysis (Carrier, 2006).

Research has shown that an attacker with control of the target system can manipulate memory mappings so that the CPU and devices on the PCI or Firewire buses don’t necessarily get the same view of memory. In such cases, attempts to acquire the memory’s contents could crash the target system or enable the attacker to mask sections of memory without that masking being apparent to the investigator (Hay, Bishop & Nance, 2009).

Furthermore, factors such as the demand for continuous availability, time-criticality, constrained computational resources on edge devices, a large physical base, a wide interface between digital and analog signals, social acceptance (including cost effectiveness and user reluctance to change), legacy issues and so on make securing a SCADA system a peculiar engineering task (Zhu et al., 2011).

Finally, no matter how well simulations and models emulate a possible solution, any conclusion needs to be tested on real SCADA systems. Real SCADA systems are expensive to build and thus require significant research funding. Access to sample information or security-failure scenarios can also be difficult, because the critical nature of SCADA systems leads owners and operators not to share information about their systems.

Applying traditional information security mechanisms directly to SCADA systems is not possible. SCADA systems cannot afford non-deterministic performance delays, security controls that require large amounts of memory, controls that block access for safety, or relatively long intermediate processes. Security measures applied to SCADA systems must take this special operating paradigm into account.

Process-related attacks typically cannot be detected by observing network traffic or protocol specifications in the system. To detect such attacks one needs to analyze data passing through the system, and include a semantic understanding of user actions (Hadziosmanovic et al., 2011).

In conclusion, there is no generic model for understanding the forensic computing processes necessary to gather digital evidence from Process Control and SCADA systems. Therefore, the need for developing a forensic computing model to support incident response and digital evidence collection process is justified.

As a consequence, this work could help to improve critical infrastructure protection and provide appropriate tools for dealing with incident response and forensic analysis on interconnected SCADA systems.

References:

Adelstein, F. (2006). Live forensics: diagnosing your system without killing it first. Commun. ACM, 49(2), 63–66. 

Ahmed, I., Obermeier, S., Naedele, M., & Richard III, G. G. (in press). SCADA systems: Challenges for forensic investigators. Computer.

Andersson, G., Esfahani, P. M., Vrakopoulou, M., Margellos, K., Lygeros, J., Teixeira, A., … Johansson, K. H. (2012). Cyber-security of SCADA systems. In Innovative Smart Grid Technologies, IEEE PES (Vol. 0, pp. 1–2). Los Alamitos, CA, USA: IEEE Computer Society. 

Boca, L., Croitoru, B., & Risteiu, M. (2010). Monitoring approach of Supervisory Control and Data Acquisition downloadable data files for mission critical situations detection. In International Conference on Automation, Quality and Testing, Robotics (Vol. 3, pp. 1–6). Los Alamitos, CA, USA: IEEE Computer Society. 

Cárdenas, A. A., Amin, S., Lin, Z.-S., Huang, Y.-L., Huang, C.-Y., & Sastry, S. (2011). Attacks against process control systems: risk assessment, detection, and response. In Proceedings of the 6th ACM Symposium on Information, Computer and Communications Security (pp. 355–366). New York, NY, USA: ACM. 

Carrier, B. D. (2006). Risks of live digital forensic analysis. Commun. ACM, 49(2), 56–61. 

Chan, E., Venkataraman, S., David, F., Chaugule, A., & Campbell, R. (2010). Forenscope: a framework for live forensics. In Proceedings of the 26th Annual Computer Security Applications Conference (pp. 307–316). New York, NY, USA: ACM. 

Chen, T. M., & Abu-Nimeh, S. (2011). Lessons from Stuxnet. Computer, 44(4), 91–93.

Hay, B., Bishop, M., & Nance, K. (2009). Live Analysis: Progress and Challenges. IEEE Security & Privacy, 7(2), 30–37.

Hay, B., & Nance, K. (2008). Forensics examination of volatile system data using virtual introspection. SIGOPS Oper. Syst. Rev., 42(3), 74–82. 

Herrerias, J., & Gomez, R. (2007). A Log Correlation Model to Support the Evidence Search Process in a Forensic Investigation. In Systematic Approaches to Digital Forensic Engineering, IEEE International Workshop On (Vol. 0, pp. 31–42). Los Alamitos, CA, USA: IEEE Computer Society. 

Hunt, R., & Slay, J. (2010). The Design of Real-Time Adaptive Forensically Sound Secure Critical Infrastructure. In Network and System Security, International Conference On (Vol. 0, pp. 328–333). Los Alamitos, CA, USA: IEEE Computer Society. 

Kilpatrick, T., Gonzalez, J., Chandia, R., Papa, M., & Shenoi, S. (2008). Forensic analysis of SCADA systems and networks. Int. J. Secur. Netw., 3(2), 95–102. 

Linda, O., Vollmer, T., & Manic, M. (2009). Neural Network based Intrusion Detection System for critical infrastructures. In Neural Networks, IEEE - INNS - ENNS International Joint Conference On (Vol. 0, pp. 1827–1834). Los Alamitos, CA, USA: IEEE Computer Society. 

Morris, T., Vaughn, R., & Dandass, Y. S. (2011). A testbed for SCADA control system cybersecurity research and pedagogy. In Proceedings of the Seventh Annual Workshop on Cyber Security and Information Intelligence Research (pp. 27:1–27:1). New York, NY, USA: ACM. 

Nance, K., Hay, B., & Bishop, M. (2009). Digital Forensics: Defining a Research Agenda. In Hawaii International Conference on System Sciences (Vol. 0, pp. 1–6). Los Alamitos, CA, USA: IEEE Computer Society. 

Nehinbe, J. O., & Nehibe, J. I. (2012). A Forensic Model for Forecasting Alerts Workload and Patterns of Intrusions. In Computer Modeling and Simulation, International Conference On (Vol. 0, pp. 223–228). Los Alamitos, CA, USA: IEEE Computer Society. 

Peterson, D. (2009). Quickdraw: Generating Security Log Events for Legacy SCADA and Control System Devices. In Conference For Homeland Security, Cybersecurity Applications & Technology (Vol. 0, pp. 227–229). Los Alamitos, CA, USA: IEEE Computer Society. 

Richard III, G. G., & Roussev, V. (2006). Next-generation digital forensics. Commun. ACM, 49(2), 76–80. 

Rodrigues, A., Best, T., & Pendse, R. (2011). SCADA security device: design and implementation. In Proceedings of the Seventh Annual Workshop on Cyber Security and Information Intelligence Research (pp. 25:1–25:1). New York, NY, USA: ACM. 

Su, Z., & Wang, L. H. (2011). Evaluating the Effect of Loading Forensic Tools on the Volatile Memory for Digital Evidences. In 2010 International Conference on Computational Intelligence and Security (Vol. 0, pp. 798–802). Los Alamitos, CA, USA: IEEE Computer Society. 

Sutherland, I., Evans, J., Tryfonas, T., & Blyth, A. (2008). Acquiring volatile operating system data tools and techniques. SIGOPS Oper. Syst. Rev., 42(3), 65–73. 

Tang, Y., & Daniels, T. E. (2005). A Simple Framework for Distributed Forensics. In International Conference on Distributed Computing Systems Workshops (Vol. 2, pp. 163–169). Los Alamitos, CA, USA: IEEE Computer Society. 

Yen, P.-H., Yang, C.-H., & Ahn, T.-N. (2009). Design and implementation of a live-analysis digital forensic system. In Proceedings of the 2009 International Conference on Hybrid Information Technology (pp. 239–243). New York, NY, USA: ACM. 

Zhu, B., Joseph, A. D., & Sastry, S. (2011). A Taxonomy of Cyber Attacks on SCADA Systems. In 2011 IEEE International Conferences on Internet of Things, and Cyber, Physical and Social Computing. San Diego, California: IEEE Computer Society.