Thursday, February 13, 2014

Research in the Context of Higher Education in the Dominican Republic

Science and technology are recognized today as decisive factors for economic and social transformation, not only for industrialized countries, where the emergence of a new knowledge economy is evident, but also for developing countries. This is especially clear in the context of a scientific and technological revolution that dominates the international scene and has become a political and economic fact of the first magnitude. The Dominican Republic is no stranger to this phenomenon, as shown by various government initiatives aimed at boosting local scientific production. A concrete example is the Fondo Nacional de Innovación y Desarrollo Científico y Tecnológico (Fondocyt), dedicated to developing and financing innovation and applied research activities, programs, and projects, with the aim of establishing a permanent system for promoting national scientific and technological research.

Despite the opportunities the national scene offers for accessing advanced knowledge, training human resources, and carrying out scientific activities, several issues prevent the country's research lines from developing properly. One of the main problems we must solve in order to join the current advance of science and technology is internal collaboration between local institutions and academic staff in the production of scientific and technological knowledge.

Given the growth and reach of the Internet and the capabilities it offers for collaborative processes, it is an ideal tool for designing models that transform the network from an information space into a space of knowledge and distributed collaboration. This capability has triggered a genuine revolution in the way research is carried out, yielding advances in knowledge and, even more, in its social use.

Wednesday, February 12, 2014

Memories of COMPUEXPO - Dominican Republic

An entire generation of children and young people, who today hold management positions in important national and international institutions or have built their own companies, grew up with COMPUEXPO as the event everyone devoted to technological progress looked forward to. In an era when communications were just beginning to develop, when we could only read about this or that trend, product, service, or new device in the magazines and books that reached the libraries or that some acquaintance brought back, it was not until October of each year that we could see, touch, and get to know these things at COMPUEXPO.

In the late 1980s and early 1990s, at one COMPUEXPO, Dominican executives were introduced to the marvel of the FAX (facsimile): the business solution capable of transmitting copies of documents over telephone lines. It seemed like magic! And how much we have seen since then! In my case, thanks to a COMPUEXPO event I was able to touch a computer for the first time and, at barely 9 years old, learn my first programming instructions in the LOGO language, in one of the halls of the Hotel Concorde, on those famous Tandy 1000 computers.

Then, year after year, we got to know the modem, the mouse, graphical interfaces, point-to-point chat, local area network services, servers, color monitors, video games, mobile communications, the Internet, and GPS, to mention just a few of what now seem like simple applications but were, in their time, the novelty of the moment.

Nevertheless, technology as such, or rather the use of technology, can represent a threat when it is not understood, for it becomes the property of a limited group of salaried experts. At the same time, it represents an opportunity for democratization when the number of people who do understand it grows substantially.

New challenges constantly arise from the advances that emerge every day in technology, medicine, the environment, and many other aspects of our daily lives. At the same time, these new challenges increasingly demand human resources with exceptional talents: human values, originality, ethics, passion, and creativity within a specific field of practice.

In this context, I quote the words of Mons. Núñez Collado, published in his 1986 book Computación y Educación Superior:

“The positive influence of technology on human progress is evident, but we cannot lose sight of the fact that, at the stage of development the world is living through, technological development in general and information science in particular could have a negative impact if we do not introduce into the technological world moral ingredients, transcendent values, and a human end valid in itself, one that preserves in man his sense of equity, dignity, and justice. The world of tomorrow will depend more on its moral precepts than on its abundance of material goods or instruments of domination, and it will have as indispensable ingredients individual and collective dignity and the capacity to think, decide, and act with freedom, responsibility, and nobility of spirit.”

In the immediate future, our Dominican society faces the challenge of educating the engineers of tomorrow: problem solvers, creators of new knowledge, capable of going beyond simple technological implementations to satisfy human and social needs, and inspired by a thirst for innovation.

Despite this declared intention, the work required to carry out this mission is marked by the obligation to deal with a new generation influenced by:

  1. The reality that we are living in exponential times. “If over the last 25 years the aeronautics industry had undergone the spectacular evolution that computing has, a Boeing 767 would today cost 350 dollars and would circle the globe in 20 minutes on about 20 liters of fuel.”
  2. The context of a world (society) in a crisis of values, limited in its natural resources, and marked by inequality and social tension.
  3. We are educating individuals from generations that belong to a mobile society, accustomed to the ephemeral, that sees human capital as a personal rather than an institutional asset. They are therefore unwilling to build a career serving one institution all their lives. They do not seek to be trained as employees; they want to be entrepreneurs.
  4. The amount of new, relevant information generated in 2009 alone is estimated at 4 exabytes (a 1 followed by 18 zeros). New technical information is doubling every 2 years, which means that for students starting a 4-year degree, 50% of the knowledge acquired in the first year will be obsolete before the end of their third year of study.
  5. According to a study by the US Department of Labor, the 10 most in-demand technical jobs of 2009 did not exist in 2004. This translates into the need to prepare students for jobs that do not yet exist, using technologies that have not yet been invented, to solve problems we do not yet know are problems.

Formal Methods and Project Planning for the Software Process

From my experience as a software developer and manager of complex software projects, I have learned over time that successfully executing a software project requires, besides a hard-working team, a clearly defined plan that all parties understand and endorse.

Over the past few years we have tried to align our administrative processes with the CMMI standard. In practice, however, for a medium-sized company this is expensive and time-consuming, and it does not pay off from the customer's perspective. Small and medium software development companies represent the largest segment of the software development industry in the US.

I believe there is real value in following the process, but I also think that whether companies follow their CMMI processes depends in large part on their customers. In the federal government market, I found that it is a requirement in the Request for Quote, yet the customer does not really want to pay the costs associated with it. Formal models such as CMMI were developed without taking small businesses and their limitations into account, nor have they been adapted to facilitate adoption. This brings the following questions to my attention:
  • How do we calculate or evaluate the return on investment of introducing CMMI, and specifically Project Planning practices, in small companies? (A back-of-the-envelope sketch follows this list.)
  • How many developers must a company have before process improvement and project management, with all their components (PP, PCM, etc.), save money within a feasible time frame?
  • Is it necessary (from an official CMMI point of view) to introduce expensive tools, or can small companies reach CMMI Level 2 without any tools?
  • Is marketing the major reason why companies invest in this certification, or is there real value in following the CMMI processes?
  • Do companies actually follow CMMI processes after certification?
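
On the first question, here is a minimal back-of-the-envelope sketch in Python. Every figure and name in it is a hypothetical placeholder, not data from any study or an official CMMI formula; it only shows the shape of the payback calculation a small company could run before committing.

    # Hypothetical ROI sketch for adopting CMMI Project Planning practices.
    # All numbers are placeholders a company would replace with its own estimates.

    appraisal_and_training = 80_000   # one-time cost (consultants, appraisal, training)
    yearly_process_overhead = 30_000  # ongoing cost of maintaining the practices
    yearly_savings = 55_000           # e.g., fewer overruns and less rework

    def payback_years(one_time, overhead, savings, horizon=10):
        """Return the first year in which cumulative savings cover cumulative cost."""
        cumulative = -one_time
        for year in range(1, horizon + 1):
            cumulative += savings - overhead
            if cumulative >= 0:
                return year
        return None  # never pays back within the horizon

    print(payback_years(appraisal_and_training, yearly_process_overhead, yearly_savings))
    # -> 4: under these invented numbers, the effort breaks even in year four
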
The Reality 

Process improvement in small enterprises is a problem that has been studied with growing interest since 2005 (Mondragon, 2006). It is naturally limited by the constraints of small businesses:
  1. Company Cash Flow: Adequate cash flow is what funds the resources assigned to a process improvement project. In small companies (fewer than 25 employees), the agendas of the technology experts are normally allocated above 100%. In many cases small businesses are also not competent at estimating effort and team performance expectations, nor at planning or formally managing their projects.
  2. People Skills: People with higher education usually have much stronger analytical thinking skills than people without this training. Training guides and process guidelines are required to deploy a complete process improvement solution.
  3. Project Size: Project size is a variable that directly affects the amount of communication, information, and skill needed for proper performance. In large projects, software engineering practices become essential to produce work that meets the requirements, keeps the schedule, respects the project's cost, provides the expected quality, and achieves the expected productivity (Goldenson, 1995). In this particular aspect, project planning plays a determining role in project success.

Project planning done right can bring peace of mind, even outright relief, to the most complex projects. Project planning done wrong, on the other hand, is easy to detect: weeks and months of delays, a blown budget, angry clients, and a likely bad ending.

Many factors lead to project success, and many others lead to failure. Success depends on a combination of variables including practices, experience, methodologies, and internal and external factors. Among these variables, however, appropriate Project Planning is one of the primary indicators of a high chance of success.

The Project Planning (PP) process area requires excellent forward planning, which includes detailed planning of the process implementation stages, task timelines, fallback positions, and re-planning. Initial planning is not enough. Projects often take wrong turns, or initial solutions prove unfounded. The project manager who does not prepare to re-plan, or has not considered and planned fallback positions for when initial plans fail, will often find that the project first stalls and then fails. We must remember that project management is not a straight-line process but an iterative one that requires agile rethinking as the known environment changes before your eyes (Anil, 1991).

Project failure is preventable with good project planning based on a well-constructed, deliverables-based Work Breakdown Structure (WBS) and proper controls. There may be some casualties along the way, such as reductions in scope, additional time, and/or additional cost, but with good project planning and timely intervention where required, these can be minimized.
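
To make the deliverables-based WBS idea concrete, here is a minimal sketch; the structure, task names, and estimates are invented for illustration. It models the WBS as a tree of deliverables whose leaf work packages carry effort estimates that roll up to the project total, which then feeds the schedule and budget.

    from dataclasses import dataclass, field

    @dataclass
    class WbsNode:
        """One element of a deliverables-based Work Breakdown Structure."""
        name: str
        effort_days: float = 0.0            # estimate on leaf work packages only
        children: list = field(default_factory=list)

        def total_effort(self):
            """Roll leaf estimates up through the deliverable hierarchy."""
            if not self.children:
                return self.effort_days
            return sum(child.total_effort() for child in self.children)

    # Invented example: a tiny project broken down by deliverable, not by activity.
    project = WbsNode("Billing System", children=[
        WbsNode("Requirements Spec", effort_days=10),
        WbsNode("Invoice Module", children=[
            WbsNode("Data model", effort_days=5),
            WbsNode("PDF generation", effort_days=8),
        ]),
        WbsNode("User Manual", effort_days=4),
    ])
    print(project.total_effort())  # -> 27.0 days, the basis for schedule and cost
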

Finally, going formal represents a big step for small companies and a decision that demands sacrifices in working time and money. However, this should not be an excuse for not taking advantage of formal methodologies. Implementing formal project management and proper planning is a very good first step. After all, small businesses should ask themselves: how are we going to eat this elephant (CMMI)? The only possible answer is simple: in small bytes! As stated in the Practices topic, start crawling, follow these guidelines, and before you know it you might be walking.

It is important for a good manager to be knowledgeable about these techniques and methodologies. Planning ahead is the best medicine; prevention is the best of all cures.

References

Anil, Iyer and Thomasson, David (1991). “An Empirical Investigation of the Use of Content Analysis to Define the Variables Most Prevalent in Project Successes and Failures”, Proceedings of the 1991 PMI Annual Seminar/Symposium.

Mondragon, O. (2006). “Addressing infrastructure issues in very small settings”. In Proceedings of the First International Research Workshop for Process Improvement in Small Settings. Software Engineering Institute, Carnegie Mellon University.

Goldenson, Dennis and Herbsleb, James (1995). “After the appraisal: A systematic survey of process improvement, its benefits and factors that influence success”. Technical Report CMU/SEI-95-TR-009, ADA302225, Software Engineering Institute, Carnegie Mellon University.

Outsourcing & Insourcing: Current Trends for the Software Market

Enhancements to international trade agreements, the incorporation of new countries into global economic cycles, the increase in air traffic, the exponential growth in the quality and bandwidth of telecommunications, and the spread of internet culture around the globe are all factors that are dramatically changing markets in every country. Increased competition has led more companies to seek outside expertise for their processes. Focusing on the core business and moving support functions to a third party has a high impact on the cost and quality of services. This is what outsourcing promises.

Out-Sourcing Trends 

This service, which draws an analogy to industrial production processes, can reduce risk in the construction and maintenance of software projects and provides direct benefits to the reliability of, and satisfaction with, the delivered products, along with clearer budgets and tighter project timetables.

This concept of service allows for the optimization of resources and technological potential, along with the advantages gained through economies of scale and an improved cost-benefit ratio. It can be applied to the full development of new projects or of particular modules, as well as to the maintenance of production systems. Three working schemas can be applied under this model: complete project development, functional cases (parts or modules of a project), and resource / day-hour (specific functions).

Clients who outsource software to outside providers are expecting nothing less than great quality, as the IT development outsourcing scene matures. After about a decade of growth, it is time for superior customer service, reliable organization, modern management and, most important of all, top-notch solutions. Knowing that they can turn to hundreds of other outsourcing firms competing for their budgets, clients are likely to reward not just affordable prices, but sustainable high quality (ClearCode, 2011).

New concepts are also emerging within the software outsourcing business, and along with them new forms of business appear. That is the case with nearshoring, farmshoring, and cloudshoring: outsourcing to a nearby country, to a rural area, or to an IT cloud (for computing power, storage, bandwidth, processing, etc.), respectively. According to the forecast made by Clear Code in its article “Software Outsourcing Trends for 2011”, these ideas will make it possible in 2011 to cut costs further, as well as improve management and control over third-party contract execution.

Cloud sourcing, which has often been predicted to be the death of outsourcing, will soon merge with the existing outsourcing market and provide better opportunities for the entire industry. Infrastructures supported by cloud resources and based on SOA principles will encourage smaller outsourcing providers, which will in turn energize the outsourcing market by heightening competition and lowering prices (OutSource2India, 2011).

International outsourcing of services has increased in the United States but still remains low, based on our economy-wide measure using International Monetary Fund trade data. Imports of computer software and information plus other business services as a share of GDP were only 0.4 percent in 2003. This share has roughly doubled in each decade; from 0.1 percent in 1983 to 0.2 percent in 1993 and to 0.4 percent in 2003. The United Kingdom has a higher outsourcing ratio than the United States at 0.9 percent in 1983, 0.7 percent in 1993, and 1.2 percent in 2003 (Amiti, 2004).

Finally, industry experts predict the emergence of a Latin America outsourcing boom, especially in Brazil, Mexico, Chile, Colombia, Costa Rica, and Peru. Service providers will also continue to shift their delivery centers to markets such as China, the Philippines, and Egypt, since these countries represent big markets with big demand for transformational and discretionary spend activity (OutSource2India, 2011).

In-Sourcing Trends 

The opposite of outsourcing can be defined as insourcing. When an organization delegates its work to another entity that is internal yet not part of the organization, it is termed insourcing. The internal entity will usually have a specialized team proficient in providing the required services. Organizations sometimes opt for insourcing because it enables them to maintain better control over the work they delegate.

Organizations involved in production usually opt for insourcing in order to cut down the cost of labor and taxes, among other things. The trend toward insourcing has increased since 2006. Organizations that have been dissatisfied with outsourcing have moved toward insourcing. Some organizations feel that insourcing gives them better customer support and better control over the delegated work than outsourcing does. According to recent studies, there is more work insourced than outsourced in the U.S. and U.K., currently the largest outsourcers in the world; the two countries outsource and insource work in roughly equal measure (OutSource2India, 2011).

Professor Matthew Slaughter of Dartmouth College presented a study of the insourcing market in the USA. His findings highlighted the following trends:
  • Insourcing companies employed over 5.4 million U.S. workers, nearly 5 percent of total private-sector employment, up from just 3 percent in 1987.
  • The share of U.S. private-sector capital investment accounted for by insourcing companies rose from over 8 percent in 1992 to over 10 percent—$111.9 billion.
  • For many years insourcing companies have accounted for around 20 percent of U.S. exports of goods—now $137 billion.
  • Insourcing companies paid their American workers over $307 billion in compensation. This was more than 6 percent of all U.S. private-sector labor compensation.
Pros and Cons

This form of contracting has its promoters and defenders, but also its detractors. Among the arguments against subcontracting, opponents mention:
  1. Professional employees or subcontractors may not feel loyalty to the company contracting the service; their loyalty, in fact, belongs to the contractor.
  2. The working conditions of these workers are usually not the best: for example, they are hired on a temporary basis even though the workflow is continuous. Critics of the subcontracting system argue that this arrangement is a contractual cover for abusing labor rights.
  3. Outsourcing often eliminates jobs in the local labor market.
On the positive side, outsourcing is claimed to:
  1. Allow companies to obtain better-quality products and services elsewhere when they cannot be found in the local market.
  2. Reduce production costs.
  3. Reduce the number of routine tasks in the contracting company and allow employees to focus on more creative and productive aspects of their work.
With regard to CMM and outsourcing, the market feels a certain pressure: if outsourcing providers are not CMM certified, customers hesitate to hand over their projects. However, according to Mark Hillary in his article “CMM might be mature, but is it adapted?”, the CMM models do not yield better quality, mostly because most smaller companies are not even equipped to provide their offshore suppliers with the required inputs in terms of specifications, validation, etc. This consultant also reports that the relationship he builds with his customers does not touch on CMM (although they have the accreditation) but instead revolves around the frequency of communication, the quality of deliverables, mixed teams with people on both sides of the ocean, etc.

Immigration policies for foreign IT graduates 

According to a news release from US Immigration and Customs Enforcement (ICE), the agency announced an expanded list of science, technology, engineering, and math (STEM) degree programs that qualify eligible graduates to extend their post-graduate training.

The current administration of President Obama has reiterated its decision and strong support, as part of comprehensive reform, for new policies that embrace talented students from other countries who enrich the nation by working in science and technology jobs in the United States.

This reform includes the expansion of the degrees and fields considered important for the US economy. The list includes a comprehensive set of careers related to mathematics, high tech, and computer science. According to the US Labor Office, these areas suffer from a shortage of skilled workers. Again, the Obama administration is helping to address shortages of talented scientists and technology experts in certain high-tech sectors by permitting highly skilled foreign graduates who wish to work in their field of study upon graduation to extend their post-graduate training in the United States.

Under the Optional Practical Training (OPT) program, foreign students who graduate from U.S. colleges and universities are able to remain in the U.S. and receive training through work experience for up to 12 months. Students who graduate with one of the newly-expanded STEM degrees can remain for an additional 17 months on an OPT STEM extension (US Immigration Office, 2011).

References

Amiti, M. (2004). “Fear of Service Outsourcing: Is It Justified?”. Working Paper. International Monetary Fund.

Clear Code (2011). “Software Outsourcing Trends in 2011”. Visited on May 17, 2011. Online at: http://clearcode.cc/2011/01/17/software-development-outsourcing-trends-2011/

Hillary, M. (2007). “CMM might be mature, but is it adapted?”. Visited on May 16, 2011. Online at: http://www.it-outsourcing-china.hyveup.tv/2007/05/cmm-might-be-mature-but-is-it-adapted/

Kirkegaard, F. (2004). “Outsourcing - Stains on the White Collar?”. Institute for International Economics.

OutSource2India (2011). “The Future of Outsourcing”. Visited on May 12, 2011. Online at: http://www.outsource2india.com/trends/future_outsourcing.asp

Slaughter, M. (2006). “Insourcing Jobs: Making the Global Economy Work for America”. Tuck School of Business at Dartmouth.

US Immigration Office (2011). News Release. “ICE announces expanded list of science, technology, engineering, and math degree programs that qualify eligible graduates to extend their post-graduate training”. Visited on May 12, 2011. Online at: http://www.ice.gov/news/releases/1105/110512washingtondc2.htm

The Human Body as a Computing Interface

<Interface /ˈint-ər-ˌfās/: The point of interconnection between two entities.>


Interfaces take their place in our lives in the form of the various devices, analog or digital, with which we normally establish some kind of interaction. In this sense, interfaces are "tools" that extend our bodies: computers, cell phones, elevators, etc. The concept of an interface applies to any situation or process where an exchange or transfer of information takes place; one way of thinking about an interface is as the area or place of interaction between two systems, not necessarily technological ones. Traditional computer input devices leverage the dexterity of our limbs through physical transducers such as keys, buttons, and touch screens. While these controls make great use of our abilities in common scenarios, many everyday situations command the use of our body for purposes other than manipulating an input device (Saponas, 2010, p. 8). Humans are very familiar with their own bodies. By nature, humans gesture with their body parts to express themselves or communicate ideas. Body parts therefore lend themselves naturally to interface metaphors that can be used as interaction tools for computerized systems.

For example, imagine rushing to class on a very cold morning while wearing gloves when you suddenly have to call a classmate to remind him to print out the homework; dialing even a simple call on a mobile phone's interface in this situation can be difficult or impossible. Similarly, when someone is jogging and listening to music on a music player, their arms are typically swinging freely and their eyes are focused on what is in front of them, making it awkward to reach for the controls to skip songs or change the volume. In these situations, people need alternative input techniques for interacting with their computing devices (Saponas, 2009, p. 4).

Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, our sense of how our body is configured in three-dimensional space allows us to accurately interact with our bodies in an eyes-free manner (Harrison, 2010, p. 11).

In terms of interface suitability and human needs, researchers have been looking for ways to give the user greater mobility and enable richer interaction. Although interaction with these new interfaces is greater, users often lack a clear mental model of their operation, since in some cases the interfaces cease to be intuitive and demand constant relearning. Several research areas offer possibilities for incorporating the full body into the interface process, such as speech recognition, gesture detection, computer vision, micro gestures, skin surfaces, body electricity, brain computing, and muscle gestures, among others.

Current research that explores different ways to use the features of one's own body for interacting with computers, presented by the Imaging Research Center of South Korea, divides this area into four types of human-body-based interfaces:

  1. Body Inspired Metaphor (BIM): Uses various parts of the body as metaphors for interaction.
  2. Body As An Interaction Surface (BAIS): Uses parts of the body as points of interaction. In this model, researchers investigate which parts of the human body are most suitable as an interface for a given task, trying to find the best spot while taking cognitive and ergonomic factors into account. So far, they have found that one of the most plausible locations is the forearm of the non-dominant hand, for its mobility, accessibility to the dominant hand, and visibility, although other parts of the body, such as the lap, may be considered (Changseok, 2009, p. 264).
  3. Object-Mapping (OM): Transports the user into the location of the object by becoming it, manipulating it from the first-person viewpoint both physically and mentally.
  4. Mixed Mode (MM): A mix of BIM and BAIS.

To illustrate how the hand, the body, and now the skin are being used in the digital interaction process, we could point to the film starring Tom Cruise in which the computer interface is manipulated by touch and hand gestures, or to the gesture and body-movement recognition of the recent "Project Natal" (commercially known as "Kinect") from Microsoft, developed as an interface for the Xbox 360. No doubt we are in the era of the touchpad, but until recently we were far from thinking that we could operate an interface purely by gesture, as with the prototype Gesture Cube, a "cube" that interprets the movement of the hands and lets us act on different devices without having to touch anything, simply by moving at a short distance. The Gesture Cube has a series of sensors that instantly detect the position of the hand and transmit the coordinates to a CPU installed in its interior, so that previously programmed motions trigger the execution of a specific task such as opening a program, calling someone, or playing music.

Contrary to what we might think, GestIC, as its creators have called this technology, does not have sensors that read the position of the hands through an optical process. Instead it is equipped with an array of sensors, grouped in fours, that measure the variation the human body produces in an electric field as its distance to the cube changes. An interesting addition is that the interface allows you to associate a different device with each face of the cube.
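
As a purely illustrative toy, and not GestIC's actual algorithm, the face-selection logic can be pictured like this: whichever sensor group reports the strongest field disturbance marks the active face, which then selects the device or action bound to it.

    # Toy sketch only: infer the nearest cube face from four field-strength
    # readings, assuming the disturbance grows as the hand gets closer.

    readings = {"top": 0.82, "left": 0.31, "right": 0.44, "front": 0.27}  # fake samples

    def nearest_face(readings):
        """Strongest field disturbance = closest hand, hence the active face."""
        return max(readings, key=readings.get)

    actions = {"top": "play music", "left": "open program",
               "right": "place a call", "front": "show the clock"}
    face = nearest_face(readings)
    print(face, "->", actions[face])  # top -> play music
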

Another area is muscle sensing. While muscle-sensing techniques promise to be a suitable mechanism for body interfaces, previous work suffers from several key limitations. In many existing systems, users are tethered to high-end equipment employing gel-based sensors affixed to their arms with adhesives (Saponas, 2009, p. 19). Other efforts build on the fact that motor neurons stimulate the fibers of the skeletal muscles, causing movement or force. This process generates electrical activity that can be measured as a voltage differential changing over time. While the most accurate method of measuring such electrical activity requires inserting fine needles into the muscle, a noisier signal can be obtained using electrodes on the surface of the skin. So far these experiments have yielded little success (Mastnik, 2008, p. 64).
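
To make the surface-electrode idea tangible in the simplest terms, here is a sketch using a synthetic signal and an invented threshold; real EMG work involves far more careful filtering and classification. It rectifies a trace, smooths it into an envelope, and thresholds the envelope to detect muscle activation.

    import numpy as np

    def emg_envelope(signal, fs, window_ms=100):
        """Rectify a raw surface-EMG trace and smooth it with a moving average."""
        rectified = np.abs(signal - np.mean(signal))  # remove DC offset, rectify
        window = int(fs * window_ms / 1000)
        kernel = np.ones(window) / window
        return np.convolve(rectified, kernel, mode="same")

    # Synthetic example: 2 seconds of noise with one simulated muscle burst.
    fs = 1000  # assumed sampling rate in Hz
    t = np.arange(0, 2, 1 / fs)
    signal = 0.05 * np.random.randn(t.size)
    signal[800:1200] += 0.5 * np.sin(2 * np.pi * 80 * t[800:1200])  # fake burst
    envelope = emg_envelope(signal, fs)
    print("activation detected:", bool((envelope > 0.1).any()))  # invented threshold
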

However, a recent project called Skinput, demonstrated by Microsoft Research, represents an enormous advance in this area. Skinput is an input technology that uses bio-acoustic sensing to localize finger taps on the skin. When augmented with a pico-projector, the device can provide a direct-manipulation graphical user interface on the body (for example, on a person's forearm). The technology was developed by Chris Harrison, Desney Tan, and Dan Morris at Microsoft Research's Computational User Experiences Group.
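
Skinput pairs its bio-acoustic sensors with machine-learning classification of where a tap landed. As a hedged sketch of that classification step only, with simplistic stand-in features and a nearest-centroid classifier rather than the published implementation, a tap window can be reduced to coarse spectral features and matched against trained skin locations:

    import numpy as np

    def tap_features(window):
        """Toy feature vector for one tap: mean energy in 8 coarse FFT bands."""
        spectrum = np.abs(np.fft.rfft(window))
        return np.array([band.mean() for band in np.array_split(spectrum, 8)])

    class NearestCentroidTaps:
        """Assign a tap to the trained skin location with the closest mean features."""
        def fit(self, X, y):
            self.centroids = {label: np.mean([f for f, l in zip(X, y) if l == label], axis=0)
                              for label in set(y)}
            return self
        def predict(self, features):
            return min(self.centroids,
                       key=lambda label: np.linalg.norm(features - self.centroids[label]))

    # Synthetic training data: two "locations" with different dominant frequencies.
    rng = np.random.default_rng(0)
    t = np.arange(256) / 256.0
    def fake_tap(freq):
        return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(t.size)
    X = [tap_features(fake_tap(f)) for f in (20, 20, 60, 60)]
    y = ["wrist", "wrist", "forearm", "forearm"]
    model = NearestCentroidTaps().fit(X, y)
    print(model.predict(tap_features(fake_tap(60))))  # -> forearm
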

Wearable computing and virtual reality would be ideal application areas for body-interface technologies. For instance, one of the defining goals of a virtual reality system is to create the feeling of being in the environment, and one cause of breaking presence is the existence of intrusive wired sensing devices. While body-based interfaces may not increase realism, they may still find good uses in imaginary virtual worlds for increasing self-awareness through self-interaction (Changseok, 2009, p. 271).

Another study conducted by Nokia Research Center suggested the concept of “virtual pockets” for opening and saving documents in a wearable computing setting. Virtual pockets are physical pockets augmented with pressure sensors on one’s clothing woven with a special material for tracking the finger position on the clothing surface. A user can move files between different pockets, a process analogous to the “dragging and dropping” in the familiar desktop environment. Using finger pressure, files can be opened or saved. This can be viewed as mapping the desktop space onto the front surface of the upper body (Changseok, 2009, p. 269).
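
A hedged sketch of how such an event mapping might look in software follows; the pocket names, pressure convention, and thresholds are all invented here, since the study itself does not publish code.

    # Hypothetical handler for the "virtual pockets" concept: pockets act like
    # desktop folders, and finger pressure distinguishes open from save.

    pockets = {"chest": ["schedule.txt"], "left_hip": [], "right_hip": ["notes.txt"]}

    def on_pocket_gesture(pocket, pressure, dragging_file=None):
        """Map a sensed touch on a clothing pocket to a desktop-style file action."""
        if dragging_file is not None:      # drag-and-drop between pockets
            pockets[pocket].append(dragging_file)
            return f"moved {dragging_file} to {pocket}"
        if pressure > 0.7:                 # firm press: save the current document
            return f"save current document into {pocket}"
        return f"open {pockets[pocket]}"   # light press: open the pocket's contents

    print(on_pocket_gesture("left_hip", pressure=0.2, dragging_file="draft.doc"))
    # -> moved draft.doc to left_hip
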

When exploring the body as an interaction device, the challenges are how to utilize the body's potential in the interaction context, and what influence and significance the use of the body has on interactive experiences. Characteristics to consider include: a small or large degree of bodily involvement in the interaction; less or more accentuation of the significance of the body in the user experience; and, finally, a small or large degree of user influence (Karen, 2008, p. 2). According to experts in the subject, body interfaces can reduce task completion time and errors because they are natural and less confusing to users. However, they also point out that excessive movement of body parts can cause muscle fatigue, so not all tasks are suitable for association with body parts.

Another trend in the quest to integrate the human body into the interface process holds that, to accomplish such goals, areas beyond HCI, such as electronics, bioinformatics, and materials science, must evolve in their own right. A research work called Communications Through Virtual Technologies, sponsored by the Association of European Telecoms, concluded that unobtrusive hardware miniaturization is assumed to permit the enabling developments in micro and optical electronics required to use the body as a computer interaction device. Molecular and atomic manipulation techniques will also be increasingly required to allow the creation of advanced materials, smart materials, and nanotechnologies (Fabrizzio, 2009, p. 33).

In addition to these conclusions, the same study adds that significant advances are also required in the areas of:

a) Self-generating power and micro-power usage in devices.

b) Active devices such as sensors and actuators integrated with interface systems in order to respond to user senses, posture and environment that can change their characteristics by standalone intelligence or by networked interaction.

c) Nano devices to have lower power consumption, higher operation speeds, and ubiquity.

In the current stage of HCI research, a slight finger tap, an acoustic vibration in the air, a movement of the eyes and tongue, or a pulse in the muscle can become a method for information transmission, and people are not only interacting with computers, but also with every object around them (Hui, 2010, p. 1).

Citations and References


Changseok, C. (2004). Body Based Interfaces. In Proceedings of the Fourth IEEE International Conference on Multimodal Interfaces (ICMI '03).
Fabrizzio, D. (2009). Communications Through Virtual Technologies. In Galimberti and G. Riva (eds.), La comunicazione virtuale. Guerini e Associati, Milano.
Harrison, C. (2010). Skinput: Appropriating the Body as an Input Surface. In Proceedings of ACM CHI 2010.
Hui, M. (2010). Human Computer Interaction, A Portal to the Future. Microsoft Research.
Karen, J. (2008). Interaction Design for Public Spaces. In ACM MM '08, October 26-31, 2008, Vancouver, British Columbia, Canada.
Mastnik, S. (2008). EMG-based Hand Gesture Recognition for Realtime Biosignal Interfacing. In Proceedings of ACM IUI '08, 30-39.
Musilek, P. (2007). A Keystroke and Pointer Control Input Interface for Wearable Computers. In Proceedings of IEEE PERCOM '07.
Saponas, T. (2009). Enabling Always-Available Input with Muscle-Computer Interfaces. In Proceedings of ACM UIST '09.
Saponas, T. (2010). Making Muscle-Computer Interfaces More Practical. In Proceedings of ACM CHI 2010.
Saponas, T. (2009). Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces. In Proceedings of ACM CHI '09.

Understanding Brain Computer Interfaces

The human brain and body are prolific signal generators. Recent technologies and computing techniques allow us to measure, process and interpret these signals. We can now infer such things as cognitive and emotional states to create adaptive interactive systems and to gain an understanding of user experience (Girouard, 2010).

Brain-computer interface (BCI) technology can be defined as an HCI system that translates our mental intentions into real interaction with a physical or virtual world. The basic operations of a BCI are to measure brain activity, process it to obtain the characteristics of interest, and then use those characteristics to interact with the environment as the user desires. From the standpoint of human-computer interaction, BCI-like interfaces have two characteristics that make them unique compared to all existing systems. The first is their potential to build a natural communication channel with the human. The second is their potential to access the user's cognitive and emotional information. This post addresses brain-computer interface technology from a technological point of view, presenting its current context, its technological problems, and the associated research.

Computer interfaces as we normally know them are not natural, in the sense that human thoughts must be translated to match the type of interface. For example, when using a keyboard, the thought of writing the letter "X" must be translated into a finger press on a given key. Although this is efficient and accomplishes the task, it does not represent natural user interaction; in fact, without training, the user would not know how to complete the operation. BCI interfaces in principle have access to human cognitive information, since they are based on measuring brain activity, which is assumed to encode all these aspects. The scientific and technological challenge is to decode this information from a continuous and huge volume of data.

Current interfaces such as pointing devices, keyboards, or eye trackers are systems that convert the user's control intentions into actions. However, they are not natural ways to model and implement interaction, and they lack the potential to access cognitive information such as workload, perception of system errors, affective information, etc. (Girouard, 2010). BCI can build a natural communication channel between the human and the machine because it translates intentions directly into commands.

The idea behind this technology is very simple: to turn our thoughts into real actions in our environment. These actions can be directed at something as simple as turning the lights in our house on or off, or at a machine as complex as a wheelchair. The idea is simple, but the technological challenge is enormous, because it involves a highly multidisciplinary body of knowledge at the intersection of neuroscience, biomedical engineering, and computer science.

Seen as a machine that translates human intentions into actions, a BCI has at least three distinct parts (Minguez, 2009); a toy software sketch of this pipeline follows the list:

1) Sensor: responsible for collecting brain activity. The vast majority of sensory modalities used in BCI come from clinical applications, such as electroencephalography, functional magnetic resonance imaging, etc.
2) Signal Processing Engine: collects the measured brain-activity signal and applies filters to decode the neurophysiological process that reflects the intention of the user.
3) Application: the module that interacts with the environment and shapes the final use of the BCI. It may be moving a wheelchair or writing by thought on a computer screen.
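
Here is that toy sketch of the three-part decomposition as code; the stage functions are random stand-ins, not a real decoder, and only illustrate how the parts plug together.

    import random
    from dataclasses import dataclass
    from typing import Callable, Sequence

    @dataclass
    class BciPipeline:
        """Sensor -> signal processing engine -> application, per Minguez (2009)."""
        read_sensor: Callable[[], Sequence[float]]  # e.g., one window of EEG samples
        decode: Callable[[Sequence[float]], str]    # filters + classifier -> intention
        act: Callable[[str], None]                  # drive the final application

        def step(self):
            raw = self.read_sensor()
            intention = self.decode(raw)
            self.act(intention)

    # Illustrative wiring with stand-in functions (all behavior here is fake).
    pipeline = BciPipeline(
        read_sensor=lambda: [random.gauss(0, 1) for _ in range(256)],
        decode=lambda window: "left" if sum(window) < 0 else "right",
        act=lambda command: print("wheelchair command:", command),
    )
    pipeline.step()
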

All BCI research can be classified within these three parts. First, researchers are working on new sensory modalities that improve the temporal and spatial resolution of brain-activity measurements and the usability and portability of the devices in general. Second, much research is being conducted on strategies for BCI signal processing. The most relevant complications are that every individual has different brain activity and that brain activity is non-stationary; work is focused on improving the filtering processes, automatic signal learning, and adaptation to each particular individual over time (McCullagh, 2010). The final aspect is integrating the BCI into an application that is useful to the user, which is encouraging efforts in areas such as hardware and software integration and deployment in real application environments.

One important challenge facing BCI research is deciding where to place the sensors with respect to the human body. This choice has important implications for the usability, ethics, and design of the system, since it determines the type of neuronal process that can be measured and processed. If the sensor is placed without intruding into the human body, the technique is called non-invasive, the approach most widely used in BCI. Other techniques require a craniotomy; in that case we speak of an invasive technique. Broadly speaking, there are different levels of penetration and sensor placement, from electrodes that penetrate the cerebral cortex to electrodes that measure cortical activity from the surface of the cortex. Beyond the ethical problems of these invasive technologies, they face the difficulty of maintaining a stable sensing mechanism: a small movement of the sensor may mean a large movement at the cellular level, causing the body's defenses to attack the "intrusive sensor" until it is disabled (Ferrez, 2009).

Recovering or replacing human motor function has been one of the most fascinating but frustrating areas of research of the last century. The possibility of interfacing the human nervous system with a robotic or mechatronic system, and using this concept to recover some motor function, has fascinated scientists for years (Minguez, 2009). The typical working paradigm is a patient with a severe spinal cord injury or a chronic neuromuscular disease that interrupts the flow of motor neural information to the body's extremities. One aspect that has enabled these developments is the advance in technology, since BCIs translate in real time the electrical activity produced by thinking into direct device control. This provides a direct communication channel from the central nervous system to devices, bypassing neural pathways that can no longer be used normally because of severe neuromuscular conditions such as stroke, cerebral palsy, or spinal injuries (Ferrez, 2008). Robotics, meanwhile, has advanced enormously in recent years in fields ranging from sensors and actuators to processing capacity and autonomy.

The first element in a BCI is a device for measuring brain activity, usually a clinical device that measures it directly or indirectly. Of all the ways of measuring brain activity, the electroencephalogram (EEG) is one of the most widespread options. It is preferred by specialists for its great adaptability, high temporal resolution, portability, and the range of possibilities derived from its clinical use. Normally, installing an EEG system requires a cap that fits over the head and includes integrated sensors that measure differences in electrical potential. A conductive gel is applied to improve conductivity between the scalp and the sensor (Ferrez, 2009). All sensors are connected to an amplifier that digitizes the signal and sends it to a computer. However, one of the biggest entry barriers for this technology is precisely the conductive gel that must be applied to the head, and current work in this area focuses on eliminating it (Minguez, 2009).
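
As a small illustration of the kind of processing applied to a digitized EEG channel, here is a sketch on a synthetic signal; the band limits simply follow the conventional alpha and beta ranges, and real systems use far more robust spectral estimates. It measures average power in a frequency band via the FFT:

    import numpy as np

    def band_power(eeg, fs, low, high):
        """Average spectral power of one EEG channel in the band [low, high) Hz."""
        freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
        power = np.abs(np.fft.rfft(eeg)) ** 2
        mask = (freqs >= low) & (freqs < high)
        return power[mask].mean()

    # Synthetic example: a fake 10 Hz alpha rhythm buried in noise.
    fs = 256  # assumed sampling rate in Hz
    t = np.arange(0, 4, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
    print("alpha power (8-13 Hz):", band_power(eeg, fs, 8, 13))
    print("beta power (13-30 Hz):", band_power(eeg, fs, 13, 30))
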

There are many applications we can imagine for this technology: entertainment, education, machinery operation, assistance for the elderly or physically challenged, etc. One of the first applications gaining ground is video game control by BCI, by means of the users' thoughts. The qualitative leap achieved by using BCI in these technologies is enormous, and market studies show that video games will be one of the first channels through which this technology is introduced. This is because video game users are a very large community, very tolerant of new technologies, who spend many hours using their devices, which eases the testing stages (Nijholt, 2008).

Much research is also being conducted on what have been called intelligent environments: intelligence embedded in the environment, capable of autonomous interaction with the user, with the clear objective of making life easier for people in different settings (for example, wearable computing). In this context, BCI provides a direct communication channel with the environment for issuing control commands and, in turn, can provide information on the cognitive and emotional state of the user, so the environment can make smarter decisions appropriate to each person (Ferrez, 2008).

In 2007, a panel of experts was formed to study the state of BCI technology worldwide. It highlighted the following. First, efforts in this line are very significant in the U.S., Europe, and Asia, and the amount of research in this area is clearly on the rise. Second, BCI is entering, if it has not already entered, the generation of medical devices, but strong acceleration is expected in non-medical and more commercial arenas such as gaming, the automotive industry, and robotics. Third, research efforts are oriented toward invasive technology in the United States, non-invasive technology in Europe, and the synergy between the two types of interfaces and robotics in Japan. Asia, and particularly China, has invested in programs of biological and engineering sciences, which has increased investment in BCI and related areas (Berger, 2007).

Despite technological advances, the operability of a BCI device outside the laboratory (i.e., under real-life conditions) remains far from settled. BCI control is characterized by unusual properties compared to more traditional inputs: long delays, noise with varying structure, long-term drifts, event-related noise, and stress effects. Current approaches post-process the BCI signal so that it conforms better to traditional control (Cincotti, 2009). With our input and output devices being the major obstacle to using computer tools and technology effectively, it can be predicted that within a moderate time frame (8-10 years) BCI will become a genuinely viable alternative to other input methods like touchscreens, keyboards, and mice.

Citations and References

Berger, T. (2007). International Assessment of Research and Development in Brain-Computer Interfaces. WTEC Panel Report.

Cincotti, F. (2009). Interacting with the Environment through Non-invasive Brain-Computer Interfaces. In Proceedings of the 5th International Conference on Universal Access in Human-Computer Interaction (UAHCI '09).

Ferrez, E. (2008). The Use of Brain-Computer Interfacing for Ambient Intelligence. LNCS, Springer Verlag.

Ferrez, P. (2009). Error-Related EEG Potentials Generated During Simulated Brain-Computer Interaction. IEEE Transactions on Biomedical Engineering 55(3), 923-929.

Girouard, A. (2010). Brain, Body and Bytes: Psychophysiological User Interaction. In Proceedings of ACM CHI EA '10.

McCullagh, P. (2010). Brain Computer Interfaces for Inclusion. In Proceedings of the 1st Augmented Human International Conference (AH '10).

Minguez, J. (2009). Brain Computer Interaction Technologies. Journal of the Research Group for Robotics and Real-Time Perception, Department of Informatics, Universität Stuttgart, No. 23, Vol. 3, pp. 20-44.

Nijholt, A. (2008). BCI for Games: A 'State of the Art' Survey. In Proceedings of the 7th International Conference on Entertainment Computing (ICEC '08).



HCI Trends for a New Era

Experts predict that the computer, at least as we know it today, will disappear before long. The computer will be integrated into other devices, and the user will be aware of it only through the functions it offers. This seems to mean the disappearance of the explicit user interface and the development of a new implicit interface, focused on concrete tasks, more intelligent, and able to communicate with other elements. It is no longer about the physical machine; we are moving away from the desktop, and interaction styles are different. In a couple of years we might not even be conscious of the computers around us (Rozanski, 2010).

The field of HCI will be characterized by two trends: an evolutionary one, improving the usability of current interaction systems and developing new design methodologies and tools adapted to the industrial environment; and a revolutionary one, trying to create a new generation of interfaces characterized by being smarter, more mobile, and less visible to the user.

The evolutionary trend of HCI will work on developing new concepts of interface usability, increasing our knowledge of the user's perspective, and developing new methods for implementing these ideas. However, many experts believe that the current development of interaction systems has reached an impasse, given that most new designs turn out to be variations on the same theme (Moulton, 1998). Achieving a substantial advance in this area requires profound changes that introduce new styles of interaction, including new input/output devices or mechanisms. Until now it was expected that these changes would come from advances in virtual reality and multimedia systems. Nowadays, most experts are betting on ubiquitous systems, mobile computing, natural language interfaces, etc.

Intelligence, personality, expression, the ability to understand meaning, interactivity, and sensory richness are all essential to good interface design. Future computers should be able to sense human presence and emulate face-to-face communication. These agents will be endowed with enough intelligence to be knowledgeable about the user's tastes, interests, acquaintances, etc. (Negroponte, 1995).

The goal of breaking the desktop computer paradigm is common to work on mobile, ubiquitous, and wearable computing. These fields claim that the services provided by computers should be as mobile as their users and should take advantage of the constantly changing context in which they are used. This can lead to active environments in which computers interact with each other and with the user in an intelligent and non-invasive way. The philosophy of ubiquitous computing is the opposite of virtual reality: VR tries to put the person inside the computer, while ubiquity talks about computers integrated into people's lives under the slogan "the world is not a desktop" (Weiser, 1994).

Wearable computing provides us with computers integrated into and adapted to the user's personal space. This personal space may comprise the user's clothes, body surface, and even the interior of the organism. The wearable computer extends the reach of the human senses, improves memory, and increases intelligence (Ross, 2000). It should also be a gateway between human beings and the outside world, filtering out what is not relevant and serving as a protective wall against cyber attacks (Mann, 1998). Wearable computers represent a real challenge for today's HCI designers and engineers because the interface, as we know it, invades the personal space of the user.

As in other technologies, one of the most important drivers of change is the market. For example, graphical user interfaces (GUIs) opened the personal computer market to users without IT knowledge. The need to seek new markets leads to a deepening of the concept of usability. It is noticeable that, even now that we have the technical capability to realize these revolutionary concepts of interaction, companies that design hardware and software tend to be very cautious. They still use standard interfaces for fear that any drastic change may cause user rejection. Progress is purely cosmetic: colors, shapes, designs, but nothing fundamental. Designers, meanwhile, blame users. According to them, users are very conservative and cling to the systems they know, avoiding adventures with other systems even if those promise better features (Jiang, 2000).

The great challenge is to build general-purpose portable computers that satisfy the five attributes described by Steve Mann (Mann, 1998). These devices should be:

  1. PERSONAL: Human and computer are inextricably intertwined.
  2. PROSTHETIC: You can adapt to it until it feels like a true extension of your body.
  3. CORPOREAL: It does not make the user look strange to others.
  4. PRIVATE: Others cannot observe or control it unless you let them.
  5. CONSTANT: Always on, always running, always ready.

Formal HCI principles for designing software and hardware interfaces are the cornerstone for accomplishing these objectives. However, this still represents a big challenge given the current state of technology: computing power, energy consumption, physical limitations, hardware volume, etc. Experts predict drastic changes for human-computer interaction. The ability to track eyes, recognize speech, and sense touch are important ways in which future computers can be improved to respond better to the needs of the user (Rozanski, 2010). These changes have much to do with the disappearance of the computer as we know it. However, predictions about new input/output devices and new styles of interaction are based on existing products, some of which are only at the prototype stage. This calls into question the premise of "total change." Surely the changes that will occur in the next five to twenty years, if they are to be truly revolutionary, are impossible to predict based on today's standards.

References

Rozanski, Evelyn (2010). “Lecture on Human Computer Interaction”. Golisano College of Computing and Information Sciences, Rochester Institute of Technology. Rochester, NY.

Moulton, Dave (1998) “Optimal Character Arrangements for Ambiguous Keyboards”, IEEE Transactions on Rehabilitation Engineering, vol. 6, no. 4, pp. 415-23.

Starner, T. (2002) “Wearable Computers: No Longer Science Fiction”, Pervasive computing. 

Negroponte, Nicholas. (1995). “Being Digital”. New York, NY: Random House.

Preece, Jenny (1994). “Human Computer Interaction”. New York, NY: Addison-Wesley.

Moggridge, Bill (2006). “Designing Interactions”. Cambridge, MA: MIT Press.

Mann, Steve (1998). “WEARABLE COMPUTING as means for PERSONAL EMPOWERMENT”, Keynote Address for The First International Conference on Wearable Computing, ICWC-98, May 12-13, Fairfax, VA.

Mann, Steve (1998). “Humanistic Intelligence: `WearComp' as a new framework and application for intelligent signal processing”. Proceedings of the IEEE, Vol. 86, No. 11. Ontario, Canada.

Jiang, James (2000) “User resistance and strategies for promoting acceptance across system types” Information & Management, Volume 37, Issue 1, Pages 25-36. Amsterdam, NL.

Weiser, Mark (1994) “The World is not a Desktop”. Interactions, January 1994, pp 7-8

Ross, A. (2000) “Wearable Interfaces for Orientation and Wayfinding”, ASSETS’00, November 13-15, Arlington, VA.