Abstract
This master's thesis concerns the development of embedded control systems. The development process for embedded control systems involves several steps, such as control design, rapid prototyping, fixed-point implementation and hardware-in-the-loop simulation.
Another step, which Volvo is currently not using within climate control, is on-line tuning. One reason for not using this technique today is that the available tools for the task (ATI Vision, INCA from ETAS, and CalDesk from dSPACE) do not handle parameter dependencies in a satisfactory way. With these constraints, on-line tuning cannot be used, and the controller development process is more laborious and time consuming.
The main task of this thesis is to solve the problem with parameter dependencies and thereby make on-line tuning possible.
Volvo Technology (VTEC) is an innovation company that provides expert functions and develops new technology for "hard" as well as "soft" products within the transport and vehicle industry. Among other things, VTEC works with embedded control systems. For one embedded control system in particular, the Climate Control Module (CCM), VTEC works with the whole chain. VTEC does this for Volvo Cars, Volvo Trucks, Volvo Construction Equipment, Renault Trucks and Land Rover.
The work process for embedded control system development is typically as follows:
Control design
Rapid prototyping
Fixed-point implementation
Hardware-in-the-loop simulation
On-line tuning
It is an iterative process, but there is a problem with the last step that limits the possibilities of working iteratively. Control design is typically done in MATLAB/Simulink, and the fixed-point implementation is typically made with a tool such as TargetLink. During these steps the parameters can be handled in an m-file. When going to the on-line tuning step, however, the parameters are handled in a tool such as ATI Vision, INCA or CalDesk. Once this step is taken, the connection to the m-file is lost. The last step is therefore something of a one-way step. It is not completely impossible to go back to the earlier steps in the development chain, but the iterative process is not well supported by the on-line tuning tools available today.
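As a hedged illustration of the kind of construct discussed next (the vector names are taken from the text below; the values are invented), the m-file might contain:
Heating    = [0 10 20 30];   % input breakpoints for the heating table
Defrosting = [0 10 20 40];   % input breakpoints for the defrosting table
BlowerHt   = [4 6 8 10];     % blower output vector for heating
BlowerDef  = [4 6 8 13];     % blower output vector for defrosting
% The first three elements of BlowerHt and BlowerDef are meant to be identical.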
Using the above vectors in interpolation tables, one table with Heating as input vector and BlowerHt as output vector, and another table with Defrosting as input vector and BlowerDef as output vector, would cause problems during the on-line tuning process. Three of the elements are meant to have identical values, but the tools of today would allow them to be tuned individually. This is just one of many constructs that are very useful as long as you stay in the MATLAB environment but cause problems during on-line tuning.
The main goals of this master's thesis are:
In the earliest era of computer development, i.e. the 1930s and 1940s, computers were generally capable of performing a single task. Over time, with advances in technology, traditional electromechanical sequencers evolved into programmable controllers built with solid-state devices.
“One of the first recognizably modern embedded systems was the Apollo Guidance Computer, developed by Charles Stark Draper at the MIT Instrumentation Laboratory.”
Since the early applications in the 1960s, the prices of embedded systems have come down and their processing power has increased dramatically. A standard for programmable microcontrollers was released in 1978 by the National Electrical Manufacturers Association (NEMA). The standard covered almost any computer-based controller, for example event-based controllers and single-board computers.
When the production cost of microprocessors and microcontrollers fell, it became feasible to replace old, big and expensive analog components such as potentiometers and variable capacitors with knobs read by a microprocessor.
With the integration of microcontrollers, the range of applications of embedded systems has increased further. Embedded systems are being used in areas where computers would generally not have been considered. Most of the complexity is contained within the microcontroller itself and very few additional components are needed, so most of the development effort is on the software side.
Embedded Systems have several common characteristics.
Model-based design (MBD) is a mathematical and visual method for addressing the problems associated with designing complex control systems. It is used in the design of industrial equipment and in automotive and aerospace applications. In this thesis the focus is on the climate control of new vehicles. The methodology is used in designing embedded software.
Embedded software development consists of four steps:
Model-based design is quite different from the conventional design method. In this methodology, designers use continuous-time and discrete-time building blocks instead of writing long and complex software code.
Model-based design enables fast prototyping, testing and verification. In addition to these advantages, dynamic effects on the system can be tested in hardware-in-the-loop (HIL) simulation mode.
Some important steps in the model-based design approach are:
The advantages of model-based design compared with the conventional approach are as follows:
The conventional approach to electronic control unit (ECU) development is summarized in the following four steps:
There are a few problems with this conventional approach to ECU development.
The first and most serious problem is that the hardware circuits are built before the control rules and results have been confirmed. This factor alone adds considerable risk to the ECU development process.
Secondly, if an error is encountered during testing of the program code, it is very difficult to judge whether the error lies in the software code or in the control algorithm. Programming the control algorithm is itself a very time-consuming process, and it takes additional time when errors are encountered and have to be debugged. Since many people from different fields of work are involved in the process, coordination between them also takes time, which increases the development cost.
This is why the conventional development process cannot satisfy the demands and requirements of the modern age.
On the basis of an integrated development environment, modern development of electronic control units can be completed and tested efficiently. Using model-based simulation and hardware-in-the-loop simulation, it is easy and convenient to eliminate software errors and to modify the control algorithms. This reduces the development cost and improves development efficiency. This modern development process is called the V-cycle development process.
This process is illustrated in Fig. 1.
Fig. 1. The V-Cycle of model-based software development.
This process is summarized as follows:
"XCP is a standardized and universally applicable protocol with much rationalization potential. It is not only used in ECU development, calibration and programming, it is also used to integrate any desired measurement equipment for prototype development, functional development with bypassing and at SIL [software-in-the-loop] and HIL [hardware-in-the-loop] test stands." [16]
For calibration and measurement, it is common practice to connect electronic control units in a CAN* network. For this purpose the CAN Calibration Protocol (CCP) has been used extensively. With increasing demands for more sophisticated controllers, new electronic control units are becoming more and more complex, and for that reason new networks such as FlexRay and TTCAN are being developed.
* Details about CAN are provided in Appendix A.
To meet the needs of these new networks, the measurement and calibration protocol should be more generalized and flexible. This generalized and flexible protocol is XCP (Universal Measurement and Calibration Protocol).
XCP is independent of the transport layer; the "X" in XCP generalizes the various transport layers that are used by the members of the protocol family, e.g. [9]:
XCP on CAN
XCP on FlexRay
XCP on Ethernet
XCP on USB and so on
XCP support for different transport layers is shown in Fig. 2.
Fig. 2. XCP support for different transport layers [10].
This chapter will give answers to the following questions:
Note: All examples used in this report are for illustration purposes only and are NOT the actual parameters used in the climate control modules of Volvo Cars and Volvo Trucks.
The complete process for developing embedded control systems is illustrated in Fig. 3. The first step of the development process is to define the parameters, which is done in the m-file. The parameter values are loaded into the MATLAB base workspace, from where the TargetLink/Simulink model fetches them to simulate the process.
After checking the simulation results, and making modifications if required, C-code is generated by TargetLink. The C-code contains all the information about the control algorithm and the input values. In the next step the auto-generated C-code is compiled using the Green Hills software suite.
Fig. 3. The complete development process.
Green Hills' software, together with GNU Make and a VBF converter, is used to generate a map file and a VBF (Volvo Binary Format) file. The VBF file is downloaded into the embedded controller. The map file is used by TargetLink to generate an A2L file. The A2L file is required by the calibration tool (ATI VISION is used for calibration in this project), and using the calibration tool we can then modify parameters in the ECU. These modifications are also called tuning.
As all parameters are defined in an m-file, some parameters depend on the values of other parameters. It is also possible that values obtained from calculations between two or more parameters are used in the definitions of other parameters. All parameters whose definitions contain other parameters, or calculations involving other parameters, are called dependent parameters, e.g.:
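As a hedged illustration (the parameter names are reused from later chapters of this report; the values are invented):
Min_Fan_Speed = 16;                                        % base parameter
Maximum_Fan_Speed_Mode4 = [114 133 144 151 168 173];       % base parameter
Max_Fan = max(Maximum_Fan_Speed_Mode4);                    % dependent: whole definition
VentFan_Speed = [0, Min_Fan_Speed, 30, 45, 110, Max_Fan];  % dependent: elements 2 and 6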
In the example above, Max_Fan and VentFan_Speed are dependent parameters: Max_Fan is calculated from Maximum_Fan_Speed_Mode4, and VentFan_Speed uses Min_Fan_Speed and Max_Fan in its definition. Min_Fan_Speed and Maximum_Fan_Speed_Mode4 are independent parameters.
Thinking about parameter dependencies, a question may arise: "Why do we need to introduce parameter dependencies in the first place?"
The answer is that, when designing a control algorithm in a tool such as Simulink, it is convenient to use named parameters (variables) instead of hard-coded numbers (constants).
For instance, suppose the highest fan level available corresponds to a voltage of 13.5 V. The designer may want a parameter for this, so that instead of using the value 13.5 in many places in the algorithm, the parameter name can be used. If one day the hardware has to be changed, and for the new hardware 13.4 V is the maximum that can be used for the highest fan level, it is easier to change one parameter value than to change many hard-coded values in different places. A one-line illustration is given below.
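As a minimal illustration (the parameter name is invented):
Max_Fan_Voltage = 13.5;   % change this one value instead of editing every hard-coded 13.5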
Sometimes it is useful to have one parameter depend on another. For instance, a look-up table contains several values in each vector, and these values may depend on other parameters. It would be rather limiting if a vector or a matrix could only contain hard-coded numbers.
So the use of dependent parameters helps keep a good structure in the algorithm and makes it easier to work with the parameters.
A significant number of parameters depend on other parameters. For instance, in Climate Control P3 the total number of parameters is 1618, of which 227 depend on other parameters and 1391 are independent. We call the independent parameters "base parameters".
Fig. 4. Percentage of dependent parameters.
To analyze the problem of parameter dependency, let us walk through the development process for embedded control systems and find out what exactly the problem with parameter dependencies is.
As the process starts with the parameter definitions in the m-file, the investigation also starts from the m-file, see Fig. 5. To visualize the process, the figure shows an example of a parameter with dependencies in its definition.
Fig. 5. Example of a parameter definition in the m-file.
After all parameters have been defined, the m-file is run in MATLAB. In this step the values of all dependency parameters are evaluated by MATLAB and loaded into the MATLAB base workspace. Precisely during this loading process the dependencies are replaced by their values, and all information about the relation between a parameter and its dependency parameters is lost.
Fig. 6. Dependency loss in the MATLAB base workspace.
Since the dependency information is now lost, the loss propagates through all further steps, for example into the generated C-code, the A2L file and the strategy file used by the calibration tool.
Fig. 7 shows the propagation of the dependency information loss. In the C-code there is thus no information with which dependency parameters can be traced.
Fig. 7. Propagation of dependency loss from MATLAB to C-code.
The problem caused by parameter dependencies comes to the surface during the calibration step.
During calibration the values of parameters are tuned. When the information about parameter dependencies is lost, each parameter value has to be tuned individually. This is shown in Fig. 8.
Fig. 8. Effect of dependency loss on the development process.
So if a parameter is used, for instance, in the definitions of five different parameters, then the value of that parameter must be tuned at those five locations individually. If a calculation is involved in a parameter definition, it must be performed manually and the value updated. This process of changing values manually is very time consuming and error prone.
There is another way to avoid doing these calculations and tuning parameter values individually: we can change the parameter values in the original m-file, where all parameter definitions reside, and repeat the complete process again. This is laborious and takes a lot of time, so this option is not feasible either.
Since the complete process for developing embedded controllers is a multistage process that depends on four highly sophisticated software platforms, there are several possible approaches to solving the dependency information loss. The following are the platforms where modifications could be made in order to handle the problem.
Below is an in-depth analysis of the platforms mentioned above and of the possibility of finding a feasible solution.
When the m-script containing all parameter definitions is run in MATLAB, all parameter values are evaluated and stored in the MATLAB base workspace. Right at this first step the dependency information in the m-script is lost. The reason is that a MATLAB array must hold values of a single class type, such as "char" or "double"; an ordinary numeric array cannot have some elements of class "char" (the names of other parameters) and other elements of class "double" (numeric values). A struct or cell array can mix character and numeric data, but the parameter definitions and the code generation use plain numeric arrays. So when the m-file is evaluated, every parameter name is replaced by its numeric value before the array is stored in the workspace.
Fig. 9. Supported class types in the MATLAB base workspace.
In our example of parameter dependency, we have an array of eight elements. The second and eighth elements of the array are names of other parameters, so these elements belong to the char class, while the rest of the elements are numerical values belonging to the double class. MATLAB therefore evaluates the values of the dependency parameters and replaces all names with their corresponding values, and our dependency information is lost. The sketch below reproduces this behavior.
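A minimal reproduction of this behavior, using the six-element VentFan_Speed example (the eight-element case in the figure behaves identically):
Min_Fan_Speed = 16;
Max_Fan = 168;
VentFan_Speed = [0, Min_Fan_Speed, 30, 45, 110, Max_Fan];
disp(VentFan_Speed)   % prints: 0 16 30 45 110 168
% The base workspace holds only this numeric result; nothing records that
% elements 2 and 6 came from other parameters.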
MATLAB does have a function called "eval" that could be used in place of a dependency parameter name, but this does not solve our problem: the function evaluates the value of the parameter, and it is again only the value that ends up in the base workspace. The dependency information is still filtered out.
The moral of the story is that nothing can be done within MATLAB to save our dependency information, unless MathWorks changes MATLAB so that arrays in the base workspace can hold values belonging to different classes in the same definition.
4.2 Parameter dependencies and TargetLink
In TargetLink we can use custom look-up tables and include custom code. Suppose for a moment that, by adding such custom look-up tables and some extra blocks, we managed to reintroduce the lost dependency information into the TargetLink model. When TargetLink generates C-code, however, it will most probably evaluate all those values, and only the resulting values will be included in the C-code.
There are two reasons for this behavior of TargetLink:
So TargetLink does not generate extra variables and pointers in the C-code, unless significant changes are made to TargetLink by dSPACE.
The C-code generated by TargetLink can be modified, and it is possible to add any kind of extra information, but there are two reasons that make this possibility impracticable.
In a calibration tool like ATI VISION, there is an option to use scripts written in the VISION scripting language or in Visual Basic. Instead of doing manual calibration, we could automate calibration using such a script.
In our case, we have matrices with dependencies. In order to do calibration using the scripting option, we would have to write functions for matrix calculations, and the script would have to be able to re-evaluate the dependencies according to the new values. This option is therefore not very feasible.
After analyzing all possibilities, only one option is left: to develop a separate Windows application that extracts the dependency information from the m-script, calculates the values of the dependency parameters according to the values tuned in the calibration tool, and writes those new dependency values back into the calibration tool.
After analysis of all possible solutions, it is concluded that the most feasible solution to the dependency loss problem is a separate Windows application which:
Among the candidate solutions, we have selected the development of a separate Windows application as the feasible one. The major reasons for selecting this solution are as follows:
The solution is an application named “Dependency Calibrator”. It works in two steps.
In the first step the m-file is parsed, and the information about each dependency parameter, along with its location in the parent parameter, is extracted and rearranged so that it can be used in the second step, the calibration.
During the second part of the process, the application first imports data from VISION, so that any value the user has tuned in the calibration tool is updated in MATLAB. The application then performs the calculations in MATLAB, and the new values obtained from those calculations are written back to VISION. This cyclic process from VISION to MATLAB and back to VISION updates the parameter values. If the user has changed values that are used by other parameters, the new values are updated at all locations where they are used. This is shown in Fig. 10.
Fig. 10. Overview of the solution.
The application “Dependency Calibrator” is divided into two parts.
A detailed explanation of how the application works follows.
The Parser works without any external software, but in order to run the Calibrator the following software packages must be installed on your system:
MATLAB is launched automatically by the application, but make sure to launch ATI VISION before you use the Calibrator part of the Dependency Calibrator application.
The project file is the key to controlling the Dependency Calibrator application. Instead of using hard-coded paths for the different files used by the application, users are given the option to select their desired locations. These locations are specified in a separate file, called the project file.
In the project file, instructions are given after certain tags. One must be careful not to alter these tags. User inputs are given after the symbol "@".
The Dependency Calibrator application is in fact capable of handling multiple m-files and multiple C-files. The directory paths for these files can be specified in the project file.
The first part of the complete dependency calibration process is the Parser. When the Parser is executed, a window appears showing two options, "Load Project File" and "Parse". The project file must be loaded before hitting the "Parse" button. Once the project file is loaded, the Parser has all the information needed to start parsing. The Parser application is shown in Fig. 11.
Fig. 11. The Parser application.
The Parser performs six operations on the input m-file(s) and C-file(s); they are explained below.
The first operation the Parser performs on the m-script is comment removal. It removes all comments from the m-file(s). Some comments start at the beginning of a line; others follow at the end of parameter definitions. If there is more than one m-file, all files are merged into a single file in this step. The output file produced in this step is free of comments.
The reason for removing comments is that, in the next steps, multi-line parameter definitions have to be converted to single lines, which requires that no comments remain. A second reason is that some comments have the same structure as the parameter definitions; in fact, they are old values of the same parameters. So, to minimize the possibility of error, the comments must be removed.
After removing the comments, the Parser also removes empty lines and extra white space inside the parameter definitions. A sketch of these clean-up steps is given below.
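A minimal sketch using MATLAB regular expressions (a real parser must also take care not to strip "%" inside string literals, which is ignored here):
line = 'VentFan_Speed = [0, Min_Fan_Speed, 30, 45, 110, Max_Fan]; % fan map';
line = regexprep(line, '%.*$', '');           % drop the trailing comment
line = strtrim(regexprep(line, '\s+', ' '));  % collapse extra white space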
All parameter definitions are required to be single lined. There are two reasons for this operation on the parameter definitions.
The first reason is that the Parser reads the complete file line by line, so it is important that the complete information about a parameter can be read in one step.
Secondly, we have to locate all dependencies present in the parameter definitions. It would also be possible to read multiple lines, but then we would face the problem of defining a new record separator (in programming, the record separator is a token that marks the end of a record; by default it is the newline character "\n") that tells the Parser where the current parameter definition ends and a new one begins. We would then have to add a symbol, a specific number of white spaces, or something similar after each parameter definition. In our case there is no such pattern repeating periodically in the m-file, and it is not practical to modify all m-files by putting a symbol after each parameter. So the Parser handles the problem by converting all parameter definitions into single-line definitions, as in the sketch below.
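A minimal sketch of this conversion (simplified: it assumes every definition ends with ";"; the file name is a placeholder):
fid = fopen('parameters_clean.m', 'r');
defs = {};
buf = '';
tline = fgetl(fid);
while ischar(tline)
    buf = strtrim([buf ' ' tline]);       % append the line to the current definition
    if ~isempty(buf) && buf(end) == ';'   % a ';' closes the current definition
        defs{end+1} = buf;                % store the single-line definition
        buf = '';
    end
    tline = fgetl(fid);
end
fclose(fid);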
At this point all parameter definitions have been converted into single-line definitions and all comments have been removed from the m-file(s). The next step in the parsing process is to separate the parameters that depend on other parameters. This is done using regular expressions.
The regular expression searches the parameter definitions for any parameter name; if a parameter name is found in a definition, that parameter is saved to a separate file.
After this step, all parameters with dependencies have been filtered out into a separate file. A sketch of the detection is shown below.
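A minimal sketch of the detection (the list of known parameter names would come from a first pass over the file; here three names are listed by hand):
paramNames = {'Min_Fan_Speed', 'Max_Fan', 'Maximum_Fan_Speed_Mode4'};
def = 'VentFan_Speed = [0, Min_Fan_Speed, 30, 45, 110, Max_Fan];';
rhs = regexp(def, '=\s*(.*);', 'tokens', 'once');  % right-hand side of the definition
pattern = ['\<(' strjoin(paramNames, '|') ')\>'];  % match any known parameter name
hits = regexp(rhs{1}, pattern, 'match');           % hits = {'Min_Fan_Speed', 'Max_Fan'}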
Going one step further, each parameter must now be parsed to find where exactly the dependency lies. This is a crucial moment in the whole parsing process.
According to our m-file, the parameters fall into three major groups.
A complete dependency is when the whole parameter definition depends on another parameter or on a calculation involving other parameters, e.g.:
Max_Fan = max(Maximum_Fan_Speed_Mode4);
A dependency in an array or vector is when some element of the array or vector depends on another parameter or on a calculation involving other parameters. In this case we have to know precisely where in the array or vector the dependency lies. For example:
VentFan_Speed = [0, Min_Fan_Speed, 30, 45, 110, Max_Fan];
In this example, VentFan_Speed depends on Min_Fan_Speed, which is the 2nd element, and on Max_Fan, which is the 6th element.
The third group of parameters is dependencies in a matrix. This is even more complicated, because in this case we have to keep track of two things: the column index and the row index.
The output of this step follows this format:
Keyword; Parameter Name; Dependency Name; X-Offset; Y-Offset;
This format is a semicolon-separated string in which:
Keyword: can be any word, but in our case it is defined as "Parameter". The sole purpose of the keyword is to distinguish this string from any other information in the file, such as comments. To be sure that a line carries parameter dependency information, it must start with this specific keyword.
Parameter name: the actual name of the parameter that has a dependency in its definition.
Dependency name: the name of the parameter on which the parent parameter depends. This can be just the name of another parameter, or an expression calculated from other parameters.
X-Offset: for a 1D array or vector, the X-Offset is the location of the dependency, i.e. the index of the element in the array. For a matrix, the X-Offset is the column index of the dependency element.
Y-Offset: for a 1D array or vector, the Y-Offset is always "y", indicating that the respective parameter is a vector. For 2D arrays and matrices, the Y-Offset is the row index of the dependency element.
Zero-based indexing is used for the X- and Y-Offsets. When the value of both the X-Offset and the Y-Offset is "d", the complete definition of the parameter is a dependency. Example records are shown below.
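For the VentFan_Speed and Max_Fan examples used earlier, the records would look as follows (with zero-based indexing, the second element gives X-Offset 1 and the sixth element gives X-Offset 5):
Parameter; VentFan_Speed; Min_Fan_Speed; 1; y;
Parameter; VentFan_Speed; Max_Fan; 5; y;
Parameter; Max_Fan; max(Maximum_Fan_Speed_Mode4); d; d;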
The conversion of the parameter dependency information from the MATLAB format to the new format is shown in Fig. 12.
Fig. 12. Extracted parameter dependency information.
In the previous step we generated the file containing the parameter names, the dependency names and the locations of the dependencies in the parameter definitions. The problem is that in the calibration tool the parameter names are not the same as those defined in the m-script: the names are changed by adding different tags during C-code generation in TargetLink. In order to find the corresponding parameter names, we therefore need to parse the C-code.
The good news is that TargetLink only changes the actual names according to a certain pattern, which can be selected and modified in the TargetLink model. Using that pattern, we can extract the corresponding parameter names.
At this stage we have the dependency information of the parameters from the m-script, and we have their corresponding names in the C-code, which can be found in the calibration tool.
In this step the Parser replaces the parameter names as defined in the m-script with their corresponding C-code names, along the lines of the sketch below.
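As a minimal sketch (the prefix "CCM_" is an invented placeholder for such a naming pattern, not the actual pattern used in the project):
mName = 'VentFan_Speed';                % name as defined in the m-file
cName = ['CCM_' mName];                 % name as it would appear in the C-code
back  = regexprep(cName, '^CCM_', '');  % recover the original m-file name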
The output of this final parsing step is ready to be used in the calibration process. The new parameter names are the same as those defined in the strategy file of ATI VISION. So the file generated in this step holds all the information about the dependency parameters and their positions in the definitions of the parent parameters.
When this process is complete, the application shows a message informing the user that parsing has finished. Clicking "OK" exits the application.
The second part of the dependency calibration process is another application, which is interfaced with MATLAB and ATI VISION. Make sure that a project is open in VISION before running the Calibrator; otherwise the application displays an error message.
The Calibrator is shown in Fig. 13.
Fig. 13. The Calibrator application.
If a VISION project is open and the Calibrator application is run, the start window shows two options.
First, the same project file that was used for the Parser must be loaded. Completion of this operation is confirmed by a message box. After this, the Calibrator has all the information it needs, and the "Calibrate" button can be pressed.
When the Calibrate button is pressed, the application exports the parameter values tuned in VISION to MATLAB and updates the corresponding parameters in the MATLAB base workspace. If a tuned value belongs to a dependency parameter, all the parameters depending on that value are updated according to the relations defined in the m-file.
After all parameter values have been updated, they are sent back to VISION at their appropriate locations, and in this way we get the desired result.
A detailed explanation of how this happens follows.
For the explanation, the Calibrator is further divided into two parts:
In this step the application uses MATLAB and ATI VISION as COM servers. When MATLAB is invoked as a COM server, it looks as in Fig. 14. The original m-file containing all the parameter definitions is required in this step; the path to it is specified in the project file. The application commands the MATLAB server to change MATLAB's current directory to the root directory of the specified file. It then runs the m-file, so that all parameter definitions are loaded into the MATLAB base workspace. A sketch of this interaction is given below.
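A minimal sketch of the interaction, written here as MATLAB COM automation calls (the real application is a separate Windows program; the path and file names are placeholders):
ml = actxserver('Matlab.Application');             % start MATLAB as a COM server
ml.Execute('cd(''C:\ccm\params'')');               % change to the m-file root directory
ml.Execute('run(''parameters.m'')');               % load all definitions into the base workspace
ml.PutWorkspaceData('Min_Fan_Speed', 'base', 20);  % push a value tuned in VISION
ml.Execute('VentFan_Speed(2) = Min_Fan_Speed;');   % re-evaluate a dependency relation
v = ml.GetWorkspaceData('VentFan_Speed', 'base');  % read back the updated vector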
To find out which parameters may be tuned in the calibration tool, we look in the C-code generated by TargetLink for variables belonging to the variable class "CAL". The variable class of any variable can be changed from the "Data Dictionary" of TargetLink.
Fig. 14. MATLAB invoked as a COM server.
Since this information was saved to a file during parsing, the application reads it from that file.
When the values of these calibratable parameters are tuned in VISION, the application fetches the new values in order to update them in MATLAB. But the parameter names in MATLAB are different, so the application translates the names.
The program now knows the parameter names as defined in the m-file and their corresponding names in VISION.
To read the values of the calibrated parameters from VISION, the application first checks the data type of each parameter: scalar, 1D array, 2D array, 2D table or 3D table. The method for getting the value differs between data types, so once the data item type is known, the program sends the parameter information to the respective method, as sketched below.
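A minimal sketch of the classification step (classification only; the routing and the actual VISION read calls are omitted; this would be a function file classifyParameter.m):
% Classify a parameter value so it can be routed to the method that knows
% how to read that data item type from VISION.
function kind = classifyParameter(value)
    if isscalar(value)
        kind = 'scalar';
    elseif isvector(value)
        kind = '1D array';
    elseif ismatrix(value)
        kind = '2D array or table';
    else
        kind = 'higher-dimensional table';
    end
end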
After this, the program runs the file containing the parameter dependency information that was generated during the parsing process. According to this file, all dependency parameters are updated with the new values obtained from VISION. Now that all dependency parameter values are updated, the program saves the information in a "mat" file named "calibration.mat". This is a binary file in which everything present in the MATLAB base workspace is saved.
At this point the process of updating parameter values from VISION to MATLAB is complete, and the next step is to propagate all the changes caused by changing the values on which other parameters depend.
Another very important point: if a value changed in VISION is actually a dependency, then, according to the requirements, that value must not be allowed to change until the change is made in the base parameter value. The calibration application does exactly that. Tuning of independent parameters is not affected by the application.
Coming now to the second part of the Calibrator: the values of all parameter dependencies must be updated back in VISION.
For this purpose the application loads the "calibration.mat" file that was saved with the new values in the previous step. The other file required by this part is the final output file from the parsing process. Using that file, the program finds the name of each parameter as it appears in the calibration tool. The program reads the value of each dependency parameter from the mat file in MATLAB, and since it also knows the location of the dependency in the parameter definition, it updates the corresponding value in VISION.
To update the value of a dependency at the right location, the program first checks the data type of the parameter and, according to the data item type, sends the information to the appropriate method. That method first checks whether the dependency is part of an array or a matrix, or whether it is the result of a calculation between other parameters. With that information, the program performs all required calculations in MATLAB and then writes the value to the right place in ATI VISION, along the lines of the sketch below.
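A minimal sketch of such an update, using the zero-based offsets from the parser record (the values are taken from the example in the results chapter):
VentFan_Speed = [0 16 30 45 110 168];  % current vector in the base workspace
Max_Fan = 176;                          % newly computed dependency value
xOffset = 5;                            % zero-based column index from the parser record
VentFan_Speed(xOffset + 1) = Max_Fan;   % MATLAB indexing is one-based
% VentFan_Speed is now [0 16 30 45 110 176]; the vector is then written back
% to the corresponding data item in VISION.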
When all values have been updated, the program shows a message saying so, and if any parameter did not belong to the variable class "CAL", the program lists these parameters with a warning in a list box.
This process is iterative and can be repeated as many times as the user wants. When all parameters have been calibrated, the application can be closed. Closing the application also closes the MATLAB command window that was opened as a COM server.
The process of updating the parameter values can be monitored in VISION using a screen window and control items.
There can be parameters in the system that are used only indirectly, meaning that their values are used in other parameters but they themselves are not used anywhere in the TargetLink model. Such parameters never appear in the C-code and are consequently not available in the calibration tool for tuning.
The Dependency Calibrator application handles these parameters as well. To tune them, we can create new data items in the calibration tool and then add the names of those data items to a text file. The path of this text file should also be specified in the project file, after the tag "Extra File for calibrating non-calibratable parameters".
To create a new data item in ATI VISION:
1. Go to "Data Item Manager". In "DataItemGroups", go to the device name, e.g. "CCM".
2. Open the "Characteristics" folder and click "Values". The window on the right side of the panel displays the data items and some information about them.
3. Right-click in this window and select "New". A dialog box titled "Select Data Item Type" appears; select the type of data item according to the type of the parameter.
4. A dialog appears asking for the name of the parameter. Choose the name as the actual parameter name defined in the m-file, followed by an underscore "_" and any other name of your choice. With this kind of name, the data item is connected to the original parameter as defined in the m-file.
5. A dialog then shows the properties of the data item. Make sure that the "Base address" of this data item does not coincide with the base address of any other predefined data item.
6. Set the memory type to "RAM [adjust and monitor]"; the value(s) of this data item can then be changed.
7. After making these adjustments, click "Apply" and then "OK".
By following this procedure, a new data item is created that was not calibratable by default; this parameter can now also be tuned in the calibration tool.
All the steps of the Dependency Calibrator are summarized and depicted in the figure below.
The results of this project are demonstrated with an example involving a few parameters that contain dependencies.
The following graphs were made in ATI VISION using control objects in a screen file.
The graphs in Fig. 15 show the values of the parameters before tuning. The requirement is that if we tune a parameter value on which other parameters depend, the value of that dependency must change at all instances where it is used.
In this example there are two parameters on whose values some other parameters depend. We call these two parameters the base parameters:
Minimum_Fan_Speed = 16;
Maximum_Fan_Speed_Mode4 = [114 133 144 151 168 173];
Fig. 15. Values of parameters before calibration.
The parameter values to notice are outlined with red blocks.
Now, when we tune the values of the base parameters, the application changes these values at all instances where they are used as dependencies.
The new values of the base parameters are as follows:
Minimum_Fan_Speed = 20;
Maximum_Fan_Speed_Mode4 = [114 133 144 151 176 173];
Now, since
Max_Fan = max(Maximum_Fan_Speed_Mode4)
the value of Max_Fan becomes 176, according to the changed value of Maximum_Fan_Speed_Mode4.
The Calibrator application updates these new values of Minimum_Fan_Speed (i.e. 20) and Max_Fan (i.e. 176) in all other parameters.
The updated values are shown in Fig. 16.
Fig. 16. Values of parameters after calibration.
Changed values are outlined with red blocks.
This thesis concludes that the application "Dependency Calibrator" handles parameter dependencies efficiently. Because all calculations are done in MATLAB, every kind of parameter operation supported by MATLAB is also supported by the application, and the capabilities of the TargetLink/Simulink model can be used to the fullest extent.
The application thus contributes the following:
The current application saves the updated parameter values in a ".mat" file, which is a binary file; it does not write or update parameter values in the original m-file. Future work related to this project could be to develop a text editor application specifically for updating the m-file. That application should preserve old parameter values by commenting them out and then write the parameter with its updated values. This might also be achieved using the new MATLAB Editor API, available in MATLAB R2010a, which is expected to be released in March 2010. The new Editor API provides programmatic control over opening and saving files, navigating and modifying file contents, and querying file properties.