Once the benchmarking process in each sector was finalized and its conclusions extracted, the trends that each area is expected to follow were identified. The strengths and weaknesses in the OS sector were analyzed to extract best practices and recommendations.
Since other areas of interest were also identified, the trends they follow within the embedded systems domain have been analysed as well.
In this deliverable, the results from task T 2.5 are presented together with recommendations and best practices.
The aim of this deliverable is to present the recommendations and best practices of the project, considering the results obtained from the benchmarking and the conclusions offered by users and developers in the workshops.
The significance of other possible areas of interest is also considered in this document. The detection of other areas, carried out in D 2.2 “OS Benchmarking results”, motivates an analysis of future trends within the embedded systems domain that is not centred only on the main areas of the project.
A roadmap for each area is also proposed in this document, outlining the trends each one is expected to follow.
The document presents the following sections:
* Chapter 2 is focused on the outcomes obtained from users and developers. The data comes from two different sources: information from the questionnaires distributed at the international events, and comments posted in the public forum of the SHARE platform.
* Chapter 3 gives an overview of the technological best practices in each area. The best practices in the other areas mentioned in D 2.2 “OS Benchmarking results” are also pointed out.
* Chapter 4 presents a brief roadmap for each sector. It also outlines the future trends of the embedded systems domain, taking into account the possible target areas identified in D 2.2 “OS Benchmarking results”.
* Chapter 5 closes the document with overall conclusions and recommendations.
The outcomes obtained from users and developers come from two different sources: questionnaires filled in by users and developers at different events, and comments and suggestions published in the forum that the project provides on its web page.
Regarding the questionnaires, two different versions were developed. The first one corresponds with the first phase of the project, when the first version of the benchmarking tool was operational. The second version corresponds to the second phase of the SHARE project, coinciding with the second version of the tool, as well as the second international event.
The first questionnaire aimed at getting initial feedback on the SHARE project’s tools. It was used at two different events, the [email protected] Conference and Exhibition and the First SHARE International Event. As it was possible to test the benchmarking tool directly, the comments related to its usage were useful for the development of the following version.
Since not all the tools were fully developed at that stage, the outcomes from this first batch of evaluations focused on the usability of the benchmarking tool. The following comments were considered for the improvement of the tool:
* The lack of a clear statement about the tools’ objectives and final output.
* The absence of a user’s guide or additional text explaining the meaning and instructions of every step.
Not only the weak points were brought to attention, but also the strong ones, essentially the fact that no reference point for OSS in the embedded systems domain had existed before.
The majority of respondents (85%) expressed a keen interest in the future developments of the project, even though they had not previously been aware of the tools developed by SHARE.
The second questionnaire aimed at getting more specific feedback on the services provided by the platform. A new version of the tools was ready, and attendees at the Second SHARE International Event were able to test all of them.
Unlike the earlier outcomes, which mostly came from industry (both users and developers), the second group of questionnaires was filled in mostly by students. As the tools were at a mature point in their development, attendees had the opportunity to use all of them.
Regarding the outcomes, the majority considered the tools very easy to use (80%) and expressed interest in the following sections: Benchmarking tool (40%), Code Sharing (20%), Collaboration Lab (15%), Review and References (10%), Wiki (8%), SIGs (7%).
All the comments and suggestions were taken into account to carry out the changes that generated the final version of the tools.
The forum provided by the platform, together with the internal forum for Consortium members, is another way of gathering information about the project. All comments and suggestions were analyzed to extract useful ideas that were then studied. The changes the platform has adopted, based on the questionnaire results and the forum comments, are shown in the table below.
| Comment / Suggestion | Status |
| --- | --- |
| Creation of a user guide for the benchmarking tool | — |
| Creation of a mailing list for members and registered users | — |
| The email addresses of members who post events on the calendar are published in clear, which invites spam; any address should be masked | — |
| The added value of SHARE in terms of software evaluation and community on this topic should be highlighted; a page containing a brief description of everything could be useful | No change done |
| A diagram of the SHARE methodology is missing | — |
| Regarding the animated image on the home page, the user should be able to wait for the link of interest before it advances | It now stops after the first animation phase |
| In the registration form, the Name and Surname fields are separated by a few lines, so new users tend to fill the Name field with their complete name and then have to move the surname | — |
| The denied access to reserved pages for non-authorized users is not explained clearly and is perceived as a page error | — |
| Receive a daily email summary of updates (newly uploaded files, news, new events in the agenda), or at least the possibility to subscribe to such a service | A weekly summary was suggested as preferable |
| Registration does not work with the Chrome browser | It works, at least on Mac |
| A user could not register on the site using Firefox | It works, at least on Mac |
| When selecting the "OSS benchmarking tool", two columns appear (evaluation contexts and software family), but not all combinations of their items are valid; the second column should be proposed according to the selection made in the first | Not fixed |
| The URL is very similar to www.share-project.org/ and the two can easily be confused | — |
| The "Share Web Space" section does not appear on the home page, but only after clicking through several links | — |
The above table shows the comments extracted from the forum and their current status. With regard to the evolution and progress of the platform, these suggestions and the changes already carried out are just the beginning of a continuous improvement programme. As the platform keeps improving, subsequent comments will be considered together with those already mentioned. The remarks that have not yet been resolved, or are still in progress, will be dealt with in the coming months.
Mission-critical embedded software performs the core processing logic for pervasive systems that affect people and enterprises every day, ranging from aerospace systems to financial markets to automotive systems. For these reasons, standards definition and compliance are particularly important and relevant in this area of Mission Critical and Near-Real-Time systems and software.
This area of software engineering has rapidly evolved from a disordered development field to one with tight, prescribed methods and controls, based on lessons learned from engineering disasters such as the ‘Three Mile Island’ and ‘Therac-25’ incidents. Standards such as MIL-STD-2167 and DO-178B/C give guidelines for producing safe and reliable software which, if followed with well-defined methods, markedly improve process quality. Whilst these standards are not the only factor in producing quality Mission Critical software, following a well-defined process (similar to processes in other Engineering Disciplines) unquestionably improves the end product.
Mission Critical and Near-Real-Time Open Source Software solutions have a long way to go to adhere to software and process standards. The processes required are often extremely difficult to implement and coordinate for an unfunded ad-hoc open-source team. Many companies that invest in Mission Critical standards are troubled by the idea of sharing software as their business models frequently rely on ‘security by obscurity’.
The benchmarking and assessment activities conducted in the Nomadic Open Source Software domain led to some interesting considerations.
Above all, it is evident that Open Source Software products have gone far beyond the pure amateur and hacker spheres and have recently gained important, massive market shares, with some important implications. Open Source Software can have a severe impact on existing processes and development models, which may lead to problems if it is not promptly and correctly embraced. Adopting Open Source Software is indeed not only a matter of grabbing and modifying some freely available software stack; rather, it means entering a new way of producing software collaboratively instead of through a well-structured hierarchy. This implies a huge degree of freedom, but also some precise written and unwritten rules to obey. Adhering to specific coding styles, versioning tools and communication channels, just to mention a few, are key factors that should be carefully and thoughtfully pursued by an OSS adopter planning to seriously jump on the OSS train. Disregarding the inherent software production process prevents the integration of derivative work into the mainline community code trees and makes code maintenance very difficult and time-consuming. This aspect is sometimes underestimated by industrial adopters, and is particularly critical in large companies with well-established software production processes that are reluctant to modify their workflows or to integrate new workflow-management strategies and productivity tools.
Quoting a famous Linux kernel maintainer, “It takes time to make things work in Linux, but it takes even more to make the code clean and acceptable by the community”. A common mistake is to neglect the latter aspect, resulting in underestimated effort planning and missed deadlines. Staying within the Linux kernel domain, one of the most evident and important Open Source Software success stories, it has recently been noticed that more and more companies asking for Linux kernel base ports expect code deliveries to be fetched from public repositories and will not accept local branches or custom patches over vanilla kernels. This is because having the code integrated into the mainstream Linux tree is a guarantee to the customer of excellent code quality and reliability.
The recommendation is hence to adopt, as soon as possible, not only the bare Open Source Software code base but also the development process behind it, even when this clashes or interferes with existing, well-established internal setups.
Nowadays, the improvement that new technologies have produced within the medical sector is unquestionable. From different points of view, this progress has brought ICT into diverse medical tasks.
Focusing our attention on the work done in the benchmarking process, an interesting analysis can be made. Regarding the open source software found within the e-health sector, some activities stand out. The fields whose development is most advanced reflect the current needs of the e-health community. Notably, these topics are closer to the medical area than to the technological one; however, ICT expertise is as necessary as knowledge of medical issues. Other software that can be related to the health sector is not specific to this area, which explains why fewer such packages were found: general-purpose applications can be used within this field without being considered tools specific to it.
Being more precise, the activities with the highest potential and most active development are, on the one hand, those related to medical images and vital signs and, on the other hand, those related to medical organizations.
Concerning medical images, the related tasks can be classified into the three groups already mentioned in deliverable D 2.2 “OS Benchmarking results”. The visualization, analysis and processing of medical images and vital signs have brought about the development of several open source applications. The large number of tools accomplishing one or more of these tasks reflects both the importance attached to this activity and the shift towards open source software in this field. Although current research projects are not centred on this kind of development as such, reusing already existing applications within larger projects is common practice. As far as open source tools are concerned, applications aimed at patients are still under development.
Current projects under development are, to a large extent, dedicated to open source solutions centred on the patient as the target user. Special attention should be paid to projects focused on the treatment of a particular illness. Within this framework, one specific field is being developed with higher priority than others: projects centred on cardiovascular issues.
With regard to medical organization applications, their number is also considerable. Driven by the goal of greater interoperability within the health sector, the development of applications pursuing this objective shows a lot of activity. This increase, not only in building solutions on top of existing open source software but also in generating new open source applications, shows a change of mentality on the developers’ side.
As a conclusion, it can be said that the current trend within health research projects is to generate products that are mainly open source, so the maintenance and development of the source code, and therefore of the project, should in theory be simpler. These developments are mainly centred on patients’ applications, which are the most likely to generate open source embedded systems. The final product can be built using open source software, can itself be released as an open source application, or both.
As mentioned in D2.2 “Benchmarking results”, “there is a huge amount of the already evaluated software that cannot be considered just in one of the sectors studied within this project”. Therefore, the best practices in this field have also been analyzed.
Starting with nomadic, multimedia networked applications, build tools were identified as generic applications that can be used by other sectors.
At the very basis of the software development process, in any field and area of interest, there are tools that translate high-level descriptions of an application into an executable image. To cite a few examples, the Software Development Kits (SDKs) of virtually every mobile platform, as well as programming toolkits for real-time embedded systems, are built upon low-level tools for creating and managing executable binaries, from simple standalone applications to complex environments based on full-fledged operating systems. In this respect, both industrial and academic projects are often based on the GNU tools. The GNU project provides a set of open source operating system kernels as well as an entire toolchain. The latter comprises the famous GCC compiler suite, which supports a wide number of powerful analysis and optimization passes and, most importantly, is capable of translating several programming languages for an ever-increasing number of embedded processor architectures. Furthermore, the GNU binutils provide all the necessary tools to manipulate binaries, such as assemblers, linkers, and various others.
From the academic point of view, the GCC compiler offers the opportunity to do research on a powerful compiler technology, widely supported by an active community of industrial, academic and amateur contributors. Many recent research projects have been conducted on GCC compiler technology. To mention a significant example, transactional memory (1) has recently risen to the research forefront as a promising programming abstraction for parallel shared-memory machines. Researchers from Stanford University have implemented support for transactional programming within GCC (2) as an extension to the existing OpenMP programming framework. A project to develop transactional memory support within GCC as a standalone solution had previously been started within the HiPEAC network of excellence (3). Developers at Red Hat then took over the implementation and integrated it into mainline GCC 4.4.
Academic research on these topics often focuses on early stages of the compilation process, where no architectural details are taken into account yet. When the compiler is used to translate high-level programming constructs into code that interacts with some kind of API library, the transformation can actually be considered as source-to-source. For instance, if the considered programming model augments standard C code with custom parallel keywords or directives, the compiler will emit transformed C code with calls into a parallel support library. Languages like CUDA, which require high programmer effort to specify efficient data partitioning and movement, are often supported by source-to-source transformation techniques applied on top of the standard compilation process to ease application development.
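As a toy illustration of the source-to-source idea described above, the following sketch lowers a custom `parallel_for` pragma in C-like source into a call to a runtime support library. All names here (`rt_parallel_for`, `LAMBDA`, the pragma itself) are invented for illustration and are not taken from GCC, CETUS, or any real framework:

```python
import re

RUNTIME_CALL = "rt_parallel_for"  # hypothetical support-library entry point

def lower_parallel_for(source: str) -> str:
    """Rewrite '#pragma parallel_for' followed by a simple counted loop
    into a call to a parallel runtime, mimicking a source-to-source pass."""
    pattern = re.compile(
        r"#pragma parallel_for\s*\n"
        r"\s*for\s*\(\s*(\w+)\s*=\s*(\w+);\s*\1\s*<\s*(\w+);\s*\1\+\+\s*\)\s*"
        r"\{(.*?)\}",
        re.S,
    )

    def repl(m):
        var, lo, hi, body = m.groups()
        # Emit a call into the (hypothetical) runtime, passing the loop
        # bounds and the loop body packaged as a callback.
        return f"{RUNTIME_CALL}({lo}, {hi}, LAMBDA({var}) {{{body}}});"

    return pattern.sub(repl, source)

code = """#pragma parallel_for
for (i = 0; i < n; i++) { a[i] = b[i] + c[i]; }"""
print(lower_parallel_for(code))
```

The transformed output is plain C again, with the parallelism delegated to a library call, which is exactly the pattern described for directive-based programming models.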
The CETUS project at Purdue University is an interesting example of an open source product that provides such compilation technology, and is being adopted by an increasing number of research groups to conduct their studies on compiler techniques for parallel programming.
Regarding the near real time and mission critical area, almost all the families identified can be considered as generic applications.
Network infrastructures constitute the natural execution environment for every kind of distributed service or application. Open Source Software plays a key role in improving the performance and resiliency of networked systems.
Regarding security issues, OSS is used to implement IDSs (Intrusion Detection Systems), which monitor networks for malicious activity or policy violations. Such systems implement two main functionalities:
* Intrusion detection. Intrusion detection is the process of monitoring the events occurring in a computer system or network, and analyzing them for signs of possible incidents, which are violations or imminent threats of violation of computer security policies, acceptable use policies, or standard security practices.
* Intrusion prevention. Intrusion prevention is the process of performing intrusion detection and attempting to stop detected possible incidents. Intrusion detection and prevention systems (IDPS) are primarily focused on identifying possible incidents, logging information about them, attempting to stop them, and reporting them to security administrators.
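A minimal sketch of the detection side can clarify how such systems work: match events against a set of known bad signatures and flag anomalous behaviour such as repeated login failures. The rule set, event format, and threshold below are invented for illustration and are not taken from any real IDS:

```python
# Toy signature-based intrusion detection: flag events matching known
# bad patterns, and flag hosts exceeding a failed-login threshold.
from collections import Counter

SIGNATURES = ["/etc/passwd", "DROP TABLE", "../.."]  # hypothetical rules
FAILED_LOGIN_LIMIT = 3

def detect(events):
    alerts = []
    failures = Counter()
    for src, payload in events:
        for sig in SIGNATURES:
            if sig in payload:
                alerts.append((src, f"signature match: {sig}"))
        if payload == "login failed":
            failures[src] += 1
            if failures[src] == FAILED_LOGIN_LIMIT:
                alerts.append((src, "possible brute-force attempt"))
    return alerts

events = [
    ("10.0.0.5", "GET /../../etc/passwd"),
    ("10.0.0.9", "login failed"),
    ("10.0.0.9", "login failed"),
    ("10.0.0.9", "login failed"),
]
for src, msg in detect(events):
    print(src, msg)
```

An intrusion *prevention* system would additionally act on such alerts, e.g. by blocking the offending source, rather than only logging and reporting them.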
Bro is an open source Unix based Network intrusion detection system (NIDS). It is released under the BSD license.
OSSEC is a free, open source host-based intrusion detection system. It performs log analysis, integrity checking, Windows registry monitoring, rootkit detection, time-based alerting and active response.
Prelude is an "agentless", universal, security information management (SIM) system, released under the terms of the GNU General Public License.
Snort is a free and open source network intrusion prevention system (NIPS) and network intrusion detection system (NIDS) capable of performing packet logging and real-time traffic analysis on IP networks.
Suricata is an open source intrusion detection system (IDS). It was developed by the Open Information Security Foundation (OISF), and a beta version was released on 31 December 2009.
Apart from detecting and preventing malicious activities, a generic Network Operator has to guarantee a specified level of Quality of Service (QoS) for each client, according to predefined Service Level Agreements (SLAs), often adopting Traffic Engineering mechanisms.
One of the most accepted solutions related to this field, is Multiprotocol Label Switching (MPLS). MPLS is a mechanism which allows the implementation of Traffic Engineering and support of QoS schemes in a very simple way. It makes it easy to create "virtual links" between distant nodes, and it can encapsulate packets of various network protocols. MPLS operates at an OSI Model layer that is generally considered to lie between traditional definitions of Layer 2 (Data Link Layer) and Layer 3 (Network Layer), and thus is often referred to as a "Layer 2.5" protocol. It was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames.
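The label-switching idea can be illustrated with a toy forwarding plane: each label-switching router keeps a per-label table mapping an incoming label to an outgoing label and interface, and forwards packets without inspecting the IP header. The table contents below are invented for illustration:

```python
# Toy MPLS forwarding plane: each router swaps the incoming label for an
# outgoing label and forwards on the corresponding interface.
LABEL_TABLE = {
    17: (42, "eth1"),    # in-label 17 -> out-label 42 via eth1 (swap)
    42: (None, "eth2"),  # None: pop the label and deliver as plain IP
}

def forward(packet):
    label = packet["label"]
    out_label, iface = LABEL_TABLE[label]
    if out_label is None:
        packet.pop("label")          # pop: the payload leaves the MPLS domain
    else:
        packet["label"] = out_label  # swap: stay on the label-switched path
    return iface, packet

iface, pkt = forward({"label": 17, "payload": "ip-datagram"})
print(iface, pkt)
```

Because forwarding is a single table lookup on a short label, this scheme supports Traffic Engineering naturally: the "virtual link" a packet follows is determined entirely by the label assigned at ingress.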
MPLS for Linux is an open source project that provides a set of MPLS signalling protocols and an MPLS forwarding plane for the Linux operating system. MPLS for Linux started out as a protocol analyser for the LDP protocol in 1999, and was later split into two packages: mpls-linux (the Linux kernel based forwarding plane, released under the GPL) and ldp-portable (a portable version of the LDP protocol, released under the LGPL). The project moved to SourceForge on 11/30/2000.
Applications for traffic management, such as tools for traffic sniffing, traffic characterization and monitoring, and traffic generation, also play an important role. The aim of using tools in this category is to:
* Analyse the traffic that crosses a network link, with the goal of identifying the traffic sources and, eventually, implementing proper policy-management rules;
* Identify the application that has generated and forwarded traffic through the network;
* Test innovative network infrastructures and mechanisms.
Traffic Management Open Source Software
Wireshark (originally named Ethereal) is a free and open-source packet analyzer. It is used for network troubleshooting, analysis, software and communications protocol development, and education.
D-ITG is an open source platform capable of producing traffic at packet level, accurately replicating appropriate stochastic processes for both the IDT (Inter-Departure Time) and PS (Packet Size) random variables (exponential, uniform, Cauchy, normal, Pareto…). D-ITG supports both IPv4 and IPv6 traffic generation and can generate traffic at the network, transport, and application layers.
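The stochastic model behind such traffic generators can be sketched as follows: draw inter-departure times and packet sizes from configurable random distributions and emit a synthetic packet trace. This is a simplified illustration in the spirit of D-ITG's IDT/PS model, not its actual API; the rate and size range are arbitrary:

```python
# Toy traffic generator: exponential inter-departure times (IDT) and
# uniform packet sizes (PS), two of the distributions D-ITG supports.
import random

def generate_trace(n_packets, rate_pps=100.0, size_range=(64, 1500), seed=0):
    rng = random.Random(seed)
    t = 0.0
    trace = []
    for _ in range(n_packets):
        t += rng.expovariate(rate_pps)   # IDT ~ Exp(rate): Poisson arrivals
        size = rng.randint(*size_range)  # PS ~ Uniform(64, 1500) bytes
        trace.append((round(t, 6), size))
    return trace

for timestamp, size in generate_trace(5):
    print(f"t={timestamp:.6f}s size={size}B")
```

Swapping `expovariate` or `randint` for other distributions (Pareto, normal, …) changes the traffic profile while keeping the same generation loop, which is essentially how a configurable generator exposes its stochastic models.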
Finally, regarding the e-health sector, as noted in D 2.2 “Benchmarking results”, the generic applications that can be applied to other sectors were those related to the visualization, processing and analysis of data, and to database storage and management.
The potential of applications for managing images and signals is present in many sectors. Communication, as a component of any application, is always based on the transmission of some kind of information that requires analysis and processing. For this reason, software specifically intended for these tasks is necessary, and a great amount of open source software is being developed to carry out the visualization and management of data.
Assisted living technologies have also been investigated as an extension of the usage of open source software in the e-health sector. Software originating in the health domain is also present in other fields that, in one way or another, try to leverage the advantages that the open source community brings to this sector.
Regarding databases, they are a necessary piece of many development processes. Many open source applications that fulfil this need can be identified and investigated (MySQL, PostgreSQL).
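As a minimal illustration of embedding an open source database in an application, the sketch below uses SQLite via the Python standard library; the table and column names (patient readings, in an e-health spirit) are invented for the example:

```python
import sqlite3

# Create an in-memory database, store a few records, and query them back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (patient_id TEXT, metric TEXT, value REAL)")
rows = [
    ("p001", "heart_rate", 72.0),
    ("p001", "spo2", 98.0),
    ("p002", "heart_rate", 85.5),
]
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
conn.commit()

# Aggregate per patient using plain SQL.
for patient, avg in conn.execute(
    "SELECT patient_id, AVG(value) FROM readings "
    "WHERE metric = 'heart_rate' GROUP BY patient_id"
):
    print(patient, avg)
conn.close()
```

The same SQL and schema would work largely unchanged against MySQL or PostgreSQL through their respective client libraries, which is precisely the reusability argument made above.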
The importance of a roadmap in the technological field is well known. Analyzing the current situation of the technology in each area allows possible trends to be identified. Reliable predictions require knowing the current activity as well as the technical points that will need support during the development process.
The open source sector works similarly to the proprietary sector in business terms: there is no sense in developing applications that would not be used, or would be used only rarely. A prior study of the current needs of each area is therefore essential to determine the short-term trend of each sector and to address the areas lacking development.
The SHARE roadmap has been derived from the results of the benchmarking process as well as from the outcomes gathered from users and developers. A brief roadmap of each sector has been produced and, in view of the other possible areas of interest identified within the project, a roadmap of the general trends of open source software within the embedded systems domain has also been drawn up.
Compared to other engineering disciplines, Software Engineering is a juvenile yet rapidly maturing industry. The field as a whole can still make basic mistakes of understanding and implementation. Borrowing experience and expertise from other engineering disciplines, standardised processes are, however, becoming increasingly widespread in all genres of Software Engineering (rather than just for Mission Critical and Near Real Time applications). Whilst the Software Engineering industry is accepting that certifiable standards and processes are an integral part of its work, external attitudes have some way to go in understanding that the process of engineering software should not be wildly different from a Mechanical or Civil Engineering process. The significant problem to overcome is one of attitude: it is widely considered that producing software is a quick and ‘cheap’ process and that ‘bugs’ are inevitable. This attitude would be unacceptable in any other engineering discipline, and there is much work to be done in changing this way of thinking. The evolution of standards, methods and successful Software Engineering projects, as well as Software Engineering education, will gradually erode this attitude and replace it with the far more professional and disciplined outlook familiar in other engineering practices.
The use of computer-controlled systems has increased dramatically in our daily life. Processors and microcontrollers are embedded in most of the devices we use every day, and this trend is expected to continue. Several research projects on ambient intelligence, pervasive systems, home automation, and ubiquitous computing aim at integrating computers even further into our environment, to the point where they are hidden. Most of these devices share the following important properties:
* Limited resources (Several embedded devices are designed under space, weight, and energy constraints imposed by the specific application)
* Real-time constraints
* Dynamic behaviour.
Furthermore, low-level architectural features introduce non-deterministic behaviour in task execution, making the estimation of worst-case computation times very difficult. As a consequence, the overall workload of complex real-time applications is subject to significant variations that cannot easily be predicted in advance.
The combination of real-time features in tasks with dynamic behaviour, together with cost and resource constraints, creates new problems to be addressed in the design of such systems, at different architectural levels. The classical worst-case design approach, typically adopted in hard real-time systems to guarantee timely responses in all possible scenarios, is no longer acceptable in highly dynamic environments because it would waste the resources and prohibitively increase the cost. Instead of allocating resources for the worst case, smarter techniques are needed to sense the current state of the environment and react as a consequence. This means that, to cope with dynamic environments, a real-time system must be adaptive. That is, it must be able to adjust its internal strategies in response to a change in the environment to keep the system performance at a desired level or, if this is not possible, degrade it in a controlled fashion. Implementing adaptive embedded systems requires specific support at different levels of the software architecture. Some adaptation can also be done at the application level; however, it potentially incurs low efficiency due to the higher overhead normally introduced by the application level services. Normally, for efficiency reasons, adaptation should be handled at the lower layers of the system architecture, as close as possible to the system resources. For those embedded systems that are distributed among several computing nodes, special network methodologies are needed to achieve adaptive behaviour and predictable responses.
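The adaptive idea above can be sketched in a few lines: instead of reserving worst-case resources, a scheduler monitors the observed execution times and adjusts a task's period (i.e. its service quality) to keep utilization near a target, degrading gracefully under load. The target, bounds, and smoothing factor below are illustrative assumptions, not taken from any real kernel:

```python
# Toy adaptive scheduler: adjust a periodic task's period T so that its
# CPU utilization C/T stays near a target, instead of sizing for the
# worst case. Degradation (longer period) is bounded and gradual.
TARGET_UTIL = 0.8
MIN_PERIOD_MS, MAX_PERIOD_MS = 10, 100

def adapt_period(period_ms, exec_time_ms):
    """Return a new period keeping utilization (= C/T) near TARGET_UTIL."""
    ideal = exec_time_ms / TARGET_UTIL      # T such that C/T == target
    # Move gradually towards the ideal period (simple low-pass step).
    new_period = 0.5 * period_ms + 0.5 * ideal
    return max(MIN_PERIOD_MS, min(MAX_PERIOD_MS, new_period))

period = 20.0
for measured_c in [10, 14, 25, 30]:  # execution times drifting upwards
    period = adapt_period(period, measured_c)
    print(f"C={measured_c}ms -> new period {period:.1f}ms "
          f"(util {measured_c / period:.2f})")
```

As the measured execution time grows, the period stretches towards the value that restores the target utilization, which is the "controlled degradation" described above; a real implementation would sit in the kernel or resource manager, close to the scheduler.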
Real-time and embedded systems have historically been small scale. However, advances in microelectronics and software now allow embedded systems to be composed of a large set of processing elements, and the trend is toward significantly enhanced functionality, complexity, and scalability, since those systems are increasingly being connected by wired and wireless networks to create large-scale Distributed Real-time Embedded Systems (usually referred to as DRES). Such embedded computing and information technologies have become at the same time an enabler for future manufacturing enterprises as well as a transformer of organizations and markets.
To ensure operational advantage, mission-critical systems increasingly require distributed, real-time information availability for applications ranging from small-scale embedded control systems to large-scale ‘enterprise’ information management systems. Information stemming from multiple sensors in these systems must be distributed in a reliable and timely manner to multiple software components, which will combine, fuse and process it in order either to present it to human operators or to control multiple distributed actuators ‘on time’ and in a safe manner. Furthermore, demanding fault-tolerance and other Quality of Service (QoS) requirements are also pushing developers to look beyond traditional middleware technologies (e.g. non-real-time publish-subscribe or service-oriented client-server) for acceptable solutions.
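The data-distribution pattern described above can be sketched as a minimal topic-based publish-subscribe broker. The topic names and sample formats are invented; real-time middleware such as DDS implementations add reliability, deadline, and other QoS contracts on top of this basic scheme:

```python
# Toy topic-based publish-subscribe: sensors publish samples to topics;
# every subscriber of a topic receives every sample published on it.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, sample):
        # Fan the sample out to all subscribers of this topic.
        for callback in self.subscribers[topic]:
            callback(topic, sample)

received = []
broker = Broker()
broker.subscribe("radar/track", lambda t, s: received.append((t, s)))
broker.subscribe("radar/track", lambda t, s: print("operator display:", s))

broker.publish("radar/track", {"id": 7, "x": 12.0, "y": 3.5})
broker.publish("engine/temp", {"celsius": 85})  # no subscribers: dropped
```

The decoupling shown here (publishers never name their consumers) is what lets the same sensor data feed both an operator display and an actuator-control component, as the paragraph above describes.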
The use of standards in open source software is crucial for product development. The high variety of products and the need for a short time to market make the reusability of software elements mandatory.
Open source software standards are basically of two types: industry-driven standards and de facto standards. The first type is defined in consortia such as Khronos. The second emerges when a piece of open source software becomes so widely used that it turns into a standard over time, especially on Linux platforms; the most important example is GStreamer in the multimedia field.
The adoption of a standard by a company makes it possible to sell the same product on different platforms and to reuse the code in very different products. Open source software that conforms to a given standard is more usable, and changes or upgrades can be implemented easily.
Any process that turns software into a standard should be encouraged, since it favors the adoption of that software by industry, and in particular by the nomadic sector, which is characterized by many different possible applications and platforms.
STMicroelectronics is heavily committed to tracking and contributing to existing and emerging industrial and de-facto standards. In particular, STMicroelectronics is a co-author of the well-established OpenMAX Integration Layer standard and has designed and developed the first open source implementation of it. The benefits this approach brought in terms of company visibility and technological leadership encourage STMicroelectronics to invest further in the same area when addressing brand new or emerging standards such as OpenCL and EGL.
In the context of mobile application development the need for open standards is steadily increasing. Nowadays several mobile devices are available on the marketplace, as well as several software development solutions based on a number of widespread operating systems such as Linux, Windows Mobile and Symbian. Leaving aside proprietary systems such as Windows Mobile and Symbian, which do not even claim to embrace open source (although many components of their SDKs are based on OSS), many Linux-based systems cannot be considered “open” either, being ad-hoc solutions tailored to the characteristics of a specific device. Indeed, just as happens with other proprietary platforms, not all the features of the device are accessible from the software stack, due to the lack of specific APIs.
A representative initiative aimed at the adoption of open standards in the mobile area is the Open Handset Alliance (OHA). The OHA is a business alliance composed of 65 participants from several areas, including mobile operators, handset manufacturers, software, semiconductor and commercialization companies. The goal of the alliance is to allow faster and cheaper development of innovative mobile solutions, achieved through “a commitment to openness, a shared vision for the future, and concrete plans to make the vision a reality”.
The first joint project of the OHA is Android™, a complete software stack for mobile devices. Android features an operating system based on the Linux kernel, a rich middleware layer that exploits all of the device’s core functionalities, and a set of key applications. This solution is completely open: the Software Development Kit (SDK), as well as the source code, can be downloaded from the website. All of the hardware features are exposed through specific middleware library APIs, making it possible to customize the phone’s functionalities according to the user’s interests.
Another remarkable initiative in this field is OpenMoko. The OpenMoko community supports a completely open solution, from the hardware to the software stack, which includes a rich set of OSS for developing mobile applications. Since the SDK is based on technologies popular among Linux developers (e.g. X server, GTK), the learning curve is gentle, and getting started with OpenMoko takes little time. The OpenMoko community actively cooperates with freesmartphone.org to develop Linux-based solutions for mobile programming that ensure interoperability and shared technology.
The development of e-health applications is moving both toward the usage of open source software and toward the generation of open source solutions. Several topics are evolving in open source directions, taking advantage of what open source communities can provide and contributing their outcomes back as feedback.
On the one hand there is the usage of open source software to build e-health solutions within the embedded system domain; on the other hand there is the generation of open source solutions in this sector. One of the most active fields is medical imaging applications. Its evolution, especially in the acquisition, analysis and processing parts, has increased the quality of medical tests. Within this field the development of embedded systems is continuously advancing.
Medical images are fundamental for diagnosis, and the acquisition of data is a critical issue that has evolved over the last decades toward a heavy reliance on ICT. The usage of already existing open source visualization, analysis and processing tools within the embedded system domain is not yet as commonplace as the usage of open source software in other fields. Nevertheless, in a short time the usage of this software will become usual practice.
A main subject for the implementation of ICT is the communication between systems, and this task cannot be avoided in the e-health sector. The transmission of medical information obtained by other devices or applications is as important as its acquisition. Within this field, the production of open source embedded systems is increasing. An example of this evolution is the development of dedicated routers for the transmission of medical images, based upon medical imaging standards such as DICOM [GAT07]. The trend toward the usage of ICT for the transmission of medical information is based on the evolution from old paper medical records to new electronic medical records. This evolution makes it necessary to develop appropriate infrastructures that allow the correct transmission of medical data.
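As a concrete illustration of how such a router can recognize the payloads it forwards: the DICOM Part 10 file format begins with a 128-byte preamble followed by the 4-byte magic string "DICM". The sketch below shows only this cheap header check; the function name is illustrative, and a real router based on [GAT07] would of course implement the full DICOM network protocol rather than file sniffing.

```c
#include <string.h>
#include <stddef.h>

/* DICOM Part 10 files start with a 128-byte preamble followed by the
 * 4-byte magic "DICM".  A dedicated router can use this inexpensive
 * check to reject non-DICOM payloads before parsing or forwarding. */
enum { DICOM_PREAMBLE_LEN = 128 };

static int looks_like_dicom(const unsigned char *buf, size_t len) {
    if (len < DICOM_PREAMBLE_LEN + 4)
        return 0;                       /* too short to be valid */
    return memcmp(buf + DICOM_PREAMBLE_LEN, "DICM", 4) == 0;
}
```

On a resource-constrained embedded router, performing this check before any deeper parsing keeps invalid traffic from consuming memory and CPU reserved for legitimate image transfers.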
Within the e-health field, the embedded systems domain evolves mainly in applications focused on the patient. Personal and portable monitoring systems are another key development within the embedded system domain for the e-health sector. Many ongoing research activities have as an outcome a personal device to monitor vital signs, such as the ECG. In the long term the inclusion of this type of embedded system in people’s daily life will be a reality. The usage of personal devices based on new technologies, which shift the control of health more to the side of the patient than to the side of the doctor, is becoming more widely accepted every day. As mentioned before, the trend in the e-health sector is toward the remote monitoring of the patient, ensuring more freedom for the user and, at the same time, continuous control by the doctor. By means of open source software within embedded systems, this kind of treatment and control will become a reality in the future.
Currently, one of e-health’s branches is the development of m-health (mobile health) applications. The increasing investment of research capital in this area, together with the usage of open source software in the embedded system domain, is creating a new and attractive area of investment and research. As the concept of m-health includes the usage of mobile telecommunication technologies as well as multimedia technologies, the contribution of the other areas defined in the project is fundamental. The combination of open source software not directly related to this sector with software intrinsic to the health field will promote the future appearance of a diverse group of open source embedded systems in the e-health area.
Regarding the e-health sector, integration is the final destination. The trend among medical organizations is to promote interoperability in order to obtain integrated systems capable of interacting with each other. Here, the existence of specific standards for the medical field is a basic requirement for the integration process: by means of these standards, communication and therefore interoperability become possible. Among them are DICOM and HL7, already mentioned in D 2.1 “Benchmarking methods and criteria”, which allow the standardization of the usage and development of open source applications.
Although the project has carried out a complete analysis and study of only three sectors, as illustrated in deliverable D 2.2 “OS Benchmarking results”, section “Other application areas”, there are many other sectors to which results from the work already done can be extrapolated.
If we consider open source software in the embedded system domain in general, some trends can be identified. While some of the identified areas have a great deal of ongoing work, there are many other areas whose work is less visible but which present great future prospects. This is the case of the maritime sector: although open source solutions are not yet visible there, the development and research activity related to embedded systems within this area is significant. The trend, as in many other areas, is the use of open source software to produce embedded solutions.
From another point of view, although the ongoing work progresses slowly, current results show evidence that a significant boost in some areas will happen in the years to come. Here the Home Automation area, referenced in D 2.2 “OS Benchmarking results”, is a key example. Research activities in the academic field, as well as deployments in industry, show the significant improvements that open source technologies are producing in all the associated devices and software. The relation between Home Automation products and the concept of Ambient Assisted Living (AAL) is inevitable. Invisibility to the user is one of the main objectives of AAL; accordingly, embedded systems in the home environment are evolving to serve not only home automation but also environmental monitoring.
A parallel case is the automotive OEM sector, which increasingly uses embedded systems. Although nowadays the usage of open source software is not a usual practice there, the trend within this field is to move to open source software in order to enjoy its cost-saving benefits. This adaptation cannot be done in a short timeframe, and the open source community might address this by providing better standards compatibility.
These are not the only areas where the usage of open source software to satisfy requirements will increase; many others will see a similar increase in the near future. Embedded e-learning solutions are starting to become more commonplace, so the evolution of this kind of tool is an expected trend in the following years. The usage of open source software instead of commercial software will become the habitual convention, promoting the learning and participation of users and developers.
Taking into account what has already been said, the general trend is the expansion of the usage of open source software within the embedded system domain to other areas that have not yet evolved toward this way of development.
In recent years, OSS has been making headway in the field of market applications. The pressure to develop applications for the market in a short period of time, in order to preserve product innovation, has forced an increase in the speed of development processes without overlooking quality. Due to this need, the usage of existing software to avoid duplicated work has become normal practice. The potential benefits that the usage of OSS can bring to industry are now widely recognised, and the advantages of the existing open source community can be exploited by the market. Embedded systems hardware and software companies are starting to take open source communities quite seriously, often integrating open source solutions into major, consumer-facing products. The emerging technologies related to mobile applications have opened a huge market in this sense. The commitment to openness is evident in projects like OpenMoko, which provides a completely open platform for smartphone development, from hardware to software. Other notable examples of open software stacks for mobile platforms are Android and Maemo, both already used in commercial devices.
From an economic point of view, OSS allows a significant reduction in the cost of software. By leveraging the developers of the open source community, the costs related to the maintenance and development of software can be reduced to a large degree. Costs associated with licensing are also reduced. The amortization of any initial cost related to the usage of OSS can be achieved in the short term.
From a technical point of view, the increasing development of products is closely connected with the costs that this work entails. In this respect, the open source community provides the possibility of a continuous development cycle for a product with little associated cost. Open source software also favors greater flexibility and higher innovation, thanks to the experiences that developers can share in the community.
From a social point of view, the benefits that can be extracted from OSS should also be emphasized. Social networks promote new development processes as well as new work models. With no associated cost, new perspectives on the usage of software can emerge.
Focusing on the target users, the developers, this new line of development can generate an increasing number of jobs. Instead of considering it a cost added to the software solutions already employed within an industry, it can be seen as a future benefit, due to the costs that would be saved in future actions.
The re-use of code lengthens exploitation processes that used to be very short, and therefore improves them. Any modification of already prepared software used to force the remodeling of all dependent products and hence the generation of a new version. Thanks to the usage of open source software by industry, development processes are more agile as well as more efficient: not everything has to be done from the beginning, since the starting point of the development phase can be found at another point of the software development, saving resources.
Not every use of open source software in industry is free of drawbacks, and usage in commercial ventures has to be carefully monitored. On many occasions the refactoring of code or designs can take considerable effort, negating any time advantage. It is also important to take into consideration the licenses that OSS employs. Most open source licenses require the redistribution of source code, which can then be used by any other individual or company. This can become a significant issue in commercial ventures when improvements have been made that confer commercial or technological advantages.
The usage of open source software is not always possible. OSS meeting the precise requirements of the company can be difficult to find: the requirements limit the products that can match these conditions, and it is not always possible to find an open solution exactly equivalent to the one being looked for. Reliability, timeliness and availability are crucial parameters that can limit the usage of OSS.
Open source allows academic projects to conduct research on innovative and competitive technologies that are supported by active communities of developers and users. The establishment of new collaborative frameworks promotes an increase in the number of developments as well as in the knowledge shared in each area of interest. An example of this activity is research on compiler technology, where the GCC compiler collection has among the largest numbers of supporters. GCC allows researchers to work with powerful compiler technology, which boasts features like OpenMP and transactional memory programming support, automatic parallelization and vectorization. Other significant initiatives in open source compilation technology come from the LLVM compiler. Source-to-source compilation is also available in open source; an example is Purdue University’s CETUS compiler.
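The OpenMP support mentioned above can be illustrated with a short example. This is a generic sketch, not taken from any of the cited projects: with GCC, compiling with `-fopenmp` splits the loop across threads and the `reduction` clause combines the partial sums; without the flag the pragma is simply ignored and the same code runs serially with an identical result.

```c
#include <stddef.h>

/* Dot product parallelized with an OpenMP pragma.  Built with
 * "gcc -fopenmp" the iterations are distributed across threads and
 * the per-thread partial sums are merged by the reduction clause;
 * built without the flag, the pragma is ignored and the loop runs
 * serially, producing the same value. */
static double dot(const double *a, const double *b, size_t n) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < (long)n; i++)
        sum += a[i] * b[i];
    return sum;
}
```

This graceful degradation to serial code is part of what makes OpenMP attractive for research: the annotated program remains a valid, portable C program for any compiler.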
Within the academic field, the usage of already existing software that can at the same time be modified allows developers to begin their work from an existing base. New developers find in open source software a starting point for their development processes as well as a reference for their work. The promotion of OSS leads to the establishment and evolution of open source communities.
The existence of open source communities allows developers with knowledge-sharing skills to promote new developments based on existing applications. Users with similar skills can exchange knowledge while working with OSS.
From an economic point of view, many R&D institutions, such as universities, are limited in the software they may deploy by extremely restrictive budgets. The usage of software that in many cases is free, or whose cost can be met by the university, opens new possibilities for their projects and aims.
The great amount of available open source software also allows the selection of the software best suited to the specific purposes of each project. This abundance offers the main advantage of increased compatibility, due to the collaboration and diversity within open source communities.
In projects developed within the academic field, the existence of developer communities can promote software compatibility by drawing on collaboration from different areas as well as sectors. In this sense, the development of a new project is not limited to the academic institution in charge of it. The openness of the open source community allows integration and participation from different places, not only geographically but also intellectually. Work in a collaborative framework promotes knowledge sharing, allowing the exchange of information and an increase in the quality of the final product. Free collaboration among academic centers offers support in the development phase of OSS and endorses collaboration among organizations.
The SHARE project’s activity does not stop with the development of the platform and its usage. There are two main activities that could allow the evolution of the project: on one side, collaboration with other initiatives or projects; on the other, the development of a new project taking the SHARE project as a starting point.
The following table summarizes the main future trends of the SHARE project:

Collaboration and integration with other initiatives
Integration of the SHARE project into another open source platform.
* Maintenance of the SHARE project
* Knowledge sharing with other initiatives

Auxiliary tool for new developments
Identifying the most suitable software is a crucial phase at the start of any design and development process; the several tools provided by the SHARE project can be used to achieve it.
* Database with several software evaluations in three different areas, for a preliminary analysis of the available resources
* Adapted QSOS methodology that can be used for one’s own purposes

Starting point for other projects and products
The tools provided by the platform can be the starting point for implementing new products or services, through the identification of limitations and possible future evolutions.
* Analysis of business processes in order to substitute proprietary software with open source software
* Usage of the SHARE platform to benchmark candidate open source software
* Identification of limitations and creation of systems to overcome them

Development of new possible tools in the platform
Complement the project with as many services as possible, in order to create a comprehensive open source platform built by the open source community and at its service.
* Evolution of the project to a more complete status
* Inclusion of the open source community in ongoing project development
Regarding collaboration, the integration with other initiatives is a critical point. Different initiatives have been considered, from the MORFEO project to the ARTEMISIA initiative.
* MORFEO. Within the open source community, the MORFEO project represents a significant open collaborative framework where not only industrial but also academic organizations have a common point for sharing knowledge as well as software. Through incorporation into this community, the maintenance of the SHARE project by the open source community would be achieved.
* ARTEMIS. Moving on to the embedded system domain, it is essential to consider the several possibilities that ARTEMIS provides. Because it aims to help European industry consolidate and reinforce its world leadership in embedded computing technologies, its different programs should be considered for possible future actions of the SHARE project. Within this field, the importance of ARTEMISIA should be emphasized: as an association for R&D actors in ARTEMIS, it provides them with several opportunities to form consortia and initiate proposals for major joint R&D projects.
* HiPEAC. Networks of excellence like HiPEAC aim at fostering cooperation among research groups and providing funding opportunities. Many research projects and initiatives in academia are based on open source, and SHARE aims at providing analogous services to researchers working in the embedded systems domain. Cooperation between SHARE and strictly related initiatives such as HiPEAC has to be promoted, to create synergies among participants.
The main objective of this possible evolution is to keep the SHARE project advancing without abandoning, for economic reasons, the work already done. Integration within other communities such as the MORFEO project could facilitate not only the maintenance of the existing platform but also its evolution and the implementation of new tools.
From another point of view, collaboration with networks of excellence would enable the generation of new projects by securing funding opportunities, as well as ways to combine information from the SHARE project with that of other initiatives, and vice versa.
The SHARE project provides tools that can be used for different aims, and these represent one of the future evolutions the project can follow. Beginning with the creation of new projects, thanks to the knowledge and contact sharing provided, and continuing with development itself, the SHARE project presents several lines of progress.
The creation of a new project rests on two main bases: funding on one side, and on the other the members of the consortium that will carry out the project. The SHARE project provides tools to facilitate communication among interested partners, together with information on funding opportunities for preparing new project proposals.
As regards the re-use of work already done, the development of new projects can find a starting point in the SHARE project. Beginning with the software comparison allowed by the platform and continuing with the various knowledge-sharing tools provided, the SHARE web space can offer a meeting point for developers.
Focusing on some of the provided tools, the benchmarking tool can be useful in identifying the best open source software for a particular purpose. After a project has been defined, it is crucial to identify the tools for the development phase, in order to obtain the maximum benefit with the least effort. Open source software has become a significant source of advantages in development processes.
Development itself is facilitated by the web space, which provides tools to share not only knowledge but code itself. For new projects and products, the SHARE project’s tools represent a meeting point for the open source community.
Concerning the last point, three different but complementary perspectives can be considered. Starting from the several business processes that industry can present, proprietary tools can be substituted with open source ones in order to improve the analysis processes.
Once the usage of open source software has been adopted by industry, and the range of available tools is known, the next step is the selection of the most suitable one. When the best choice is not known, the SHARE platform can be used to benchmark the existing possibilities. This phase allows an evaluation of the current open source software, with the goal of decreasing the costs of proprietary software.
This process can clearly reveal the limitations of the software under consideration. In this sense, the next step would be not only the identification of the software’s limitations but also the development of systems that permit their correction.
As a consequence of the collaboration and integration with other initiatives, an increase in the services provided by the SHARE platform can be achieved. The inclusion of the open source community, and therefore of its developers, promotes the modification and expansion of the tools already available. In this respect, extending the focus of the project by introducing new ways of communication between the target users of the platform could be a good starting point for any developments associated with the project. Alternatives such as teleconferencing and virtual collaboration may be considered as possible improvements, not only of the platform but also of the working process within the open source field.
This deliverable presents the overall conclusions of the Knowledge Framework Development of the SHARE project. Through the results obtained from the benchmarking process in each target area, as well as the information gained through the workshops and international events, the future trends in each sector have been extracted. Furthermore, the identified areas that can take advantage of the work done within this project, extrapolating its activities to their own fields, have shown their own trends within the open source area.
The ease of use and the collaborative possibilities that open source software provides in the embedded system domain, and in many others too, facilitate and promote interoperability among applications. This interoperability endorses the sharing of knowledge as well as the creation of collaborative communities with a common purpose.
The market and academic potential of Open Source Software has been illustrated. Regarding the particular tasks developed within the SHARE project, its potential from several points of view has been identified.
In conclusion, the SHARE project has demonstrated the knowledge framework created, the future possibilities of the platform itself, and the future trends that the embedded system domain presents.
[AND10] https://www.openhandsetalliance.com/android_overview.html Last checked: February 2010
[ART10] https://www.artemis-ju.eu/home Last checked: February 2010
[BAE07] Baek, Woongki and Minh, Chi Cao and Trautmann, Martin and Kozyrakis, Christos and Olukotun, Kunle. “The OpenTM Transactional Application Programming Interface” 2007, International Conference on Parallel Architectures and Compilation Techniques.
[BMC08] SHARE Project ICT – 2007.3.7 Deliverable D2.1 “Benchmarking methods and criteria”, November 2008. Accessible at: https://www.share-project.eu
[BUT06] G. Buttazzo, “Research trends in real-time computing for embedded systems”. ACM SIGBED Review, 3(3), 2006
[CET10] https://cetus.ecn.purdue.edu/ Last checked: February 2010
[DAB09] Dabiri F, Massey T, Noshadi H, Hagopian H, Lin CK, Tan R, Schmidt J, Sarrafzadeh M. “A Telehealth Architecture for Networked Embedded Systems: A Case Study in In-vivo Health Monitoring” IEEE Trans Inf Technol Biomed 351-9 May 2009
[GAT07] R. Gatta, F. Abeni, M. Buglione, A. Peveri, F. Barbera, S. Tonoli, P. Fratta, S. M. Magrini, “Open-source, low-cost, high-reliability solutions for digital imaging systems: example of a ‘DICOM router’”. La radiologia medica, ISSN: 0033-8362, December 2007
[GCC10] https://gcc.gnu.org/ Last checked: February 2010
[GNU10] www.gnu.org Last checked: February 2010
[HIP10] www.hipeac.net Last checked: February 2010
[KRO03] M. Kroll, B. Schütze, T. Geisbe, H.-G. Lipinski, D. H. W. Grönemeyer, T. J. Filler, “Embedded systems for signing medical images using the DICOM standard”. Proceedings of the 17th International Congress and Exhibition, Volume 1256, Pages 849-854, June 2003
[LLV10] https://llvm.org/ Last checked: February 2010
[MOR10] www.morfeo-project.org/ Last checked: February 2010
[MYS10] www.mysql.com/ Last checked: February 2010
[OHA10] https://www.openhandsetalliance.com Last checked: February 2010
[OMK10] https://wiki.openmoko.org Last checked: February 2010
[OMP10] www.openmp.org Last checked: February 2010
[OSB10] SHARE Project ICT – 2007.3.7 Deliverable D2.2 “OS Benchmarking results”, January 2010. Accessible at: https://www.share-project.eu
[PER06] C.E. Pereira, L. Carro, “Distributed Real-Time Embedded Systems: Recent Advances, Future Trends and their Impact on Manufacturing Plant Control”. Proc. of INCOM’06, Saint Etienne, France.
[PIN05] Ping W, Jin-Gang W, Xiao-Bo S, Wei H. “The Research of Telemedicine System Based on Embedded Computer” Conf Proc IEEE Eng Med Biol Soc. 114-7 January 2005
[POS10] www.postgresql.org Last checked: February 2010
[ROS01] E. Rosen, A. Viswanathan, R. Callon, “Multiprotocol Label Switching Architecture”, RFC 3031, IETF, Network Working Group
[SCA07] Karen Scarfone and Peter Mell Guide to Intrusion Detection and Prevention Systems (IDPS), NIST CSRC special publication SP 800-94, released 02/2007
[SEL09] R.W. Selby, “Synthesis, Analysis, and Modeling of Large-Scale Mission Critical Embedded Software Systems”. Proc. of ICSP ’09, Trustworthy Software Development Processes: International Conference on Software Process, Vancouver, Canada.
[SCH09] Schindewolf, Martin; Cohen, Albert; Karl, Wolfgang; Marongiu, Andrea; Benini, Luca, “Towards Transactional Memory Support for GCC”, 2009, International Workshop on GCC Research Opportunities.
[SEU08] Seung-wook Yang, Hee-chul Yoon, Jeong Cho, Sang-bum Kye, Tai-kyong Song “A Mobile Medical Device for Point-of-Care Applications” IEEE International Ultrasonics Symposium Proceedings 2008
[SHA95] Shavit, Nir and Touitou, Dan “Software Transactional Memory” 1995, Proceedings of the fourteenth annual ACM symposium on Principles of distributed computing.
[TAN07] Tan-Hsu Tan, Ching-Su Chang, Yung-Fu Chen, Cheng Lee, “Implementation of a Portable Personal EKG Signal Monitoring System”. Lecture Notes in Computer Science, Springer Berlin / Heidelberg, ISSN: 0302-9743 (Print) 1611-3349 (Online), December 2007
[TAN09] Tan-Hsu Tan, Ching-Su Chang, Yung-Fa Huang, Yung-Fu Chen, Cheng Lee, “Development of a Portable Linux-Based ECG Measurement and Monitoring System”. Journal of Medical Systems, Springer Netherlands, ISSN: 0148-5598 (Print) 1573-689X (Online), November 2009
[VIL07] Vilem Srovnal, Marek Penhaker “Health Maintenance Embedded System in Home Care Applications” Proceedings of the Second International Conference on Systems ISBN: 0-7695-2807-4 Page: 17, 2007.
 full specification and schematics of Neo 1973 and Neo Freerunner smartphones are available at https://wiki.openmoko.org/wiki/Neo_FreeRunner_Hardware