The Use of Jail as a Dumping Ground for the Mentally Ill and the Failures
The state of mental health treatment has continued to decline, in my opinion. Because many people who suffer from a mental illness are left untreated, many are arrested for their erratic behavior; the jail has essentially become a dumping ground for the mentally ill. According to the NY Times article "Mental Illness, Untreated Behind Bars", a meeting was held at the White House and the topic of the mentally ill was brought to light. A member of the National Sheriffs' Association stated that the mentally ill have filled our jails, and that it would be ideal if effective treatment were provided to those individuals (The Editorial Board, 2017). The president of the United States has acknowledged that jail is the wrong place for a mentally ill person and that his administration would strive to address this issue. One good start the article mentioned is extending the public health system into jails and prisons (The Editorial Board, 2017).
As stated previously, mental health is an issue that has yet to be solved within local jails and prisons, and it has grown because the primary reason the mentally ill wind up in jail or prison is their untreated behavior. Sheriff John Layton of Marion County raised the concern about the mentally ill being jailed, stating, "Often mentally ill people are jailed, not because they pose a threat to public safety, but because their behaviors are annoying to officers and the public. Of the 2,300 inmates in his county, an estimated 40 percent suffer from mental illness"; nearly $8 million a year is spent caring for the mentally ill (The Editorial Board, 2017). To break this cycle with regard to the mentally ill, many counties have signed or will sign the Stepping Up initiative, which is supported by the Council of State Governments Justice Center, the National Association of Counties, and the American Psychiatric Association Foundation (The Editorial Board, 2017). The purpose of the Stepping Up initiative is to help local agencies develop strategies for reducing the number of people with mental illness in jails (The Editorial Board, 2017).
Just last year, Congress passed the Comprehensive Justice and Mental Health Act, which authorized millions of dollars for state and local efforts to reduce incarceration of the mentally ill and also expands treatment for those who are returning to society (The Editorial Board, 2017). The main goal of these policies, as stated before, is to reduce the rate at which the mentally ill go to jail. Although a crime committed by a mentally ill person may have been erratic or deadly, it still should not deny them the most appropriate, effective treatment.
Analysis of the Failure
Issues that came to mind while reading the news article included the lack of training officers are given when engaging with the mentally ill. In the article, one officer stated, "often mentally ill people are jailed, not because they pose a threat to public safety, but because their behaviors are annoying to officers and the public" (The Editorial Board, 2017).
I thought it was interesting that the officer justified jailing a mentally ill person because his behavior was "annoying." To me, that simply demonstrates the lack of training officers receive for dealing with the mentally ill, since they act on frustration. If I had to place blame for this issue, I would say that the government plays a major role in why the system has failed the mentally ill. The purpose of government is to establish justice for all and to protect the rights of its citizens. It can enforce laws and make laws that are fair to the people. Although the president is aware of this issue and has addressed it, it has become a problem that has been brushed under the rug. Although my news article was not specific about who is to blame for this issue, one article agreed with me that officers are one of the factors, due to the lack of training they have received when dealing with the mentally ill. The jails are running the largest mental health hospitals and they don't know it. The mentally ill need more treatment options than just jail, but without the right guidelines on how to handle such situations, plus the lack of available resources, it is hard to resolve this issue. If certain policies are not put in place, officers will continue to do what they are trained to do. All in all, the mentally ill are a sensitive topic as well as a difficult problem to solve because of the complexity of the issue. Edmonson said it best when she stated that short-term solutions are often applied to complex problems.
The authors of this article examined the cost of mental illness and substance use for individuals who are in treatment. The data for this study came from a three-year study of 203 individuals who were enrolled in some kind of treatment. All in all, they found that effective treatment of substance abuse among the mentally ill appears to reduce incarceration. The authors also found that housing could reduce the number of arrests of the mentally ill. This source, although it discusses mentally ill inmates, would not be a good one to gain information from.
The article goes well outside my subject in examining the factors and costs in the legal system. It would not strengthen my topic for the simple fact that it is off topic. Finn, P. (1989). Coordinating services for the mentally ill misdemeanor offender. Social Service Review, 63(1), 127-141. The author gives some recommendations on how health care agencies and law enforcement can work together to form a better system for handling the mentally ill. Often, police officers end up transporting offenders who may suffer from a mental illness. Usually, the process for an officer to admit an offender to a mental health facility takes time because of an evaluation, which is why many officers tend to take the offender straight to jail to save time, even though the jail is not equipped to treat the offender.
The author is not only placing blame on law enforcement but also on the health care agencies for not getting involved with the criminal justice system. The author believes that with the proper social service agencies and agency directors, new approaches can be developed from existing networks to establish their own arrangements with the police for handling the mentally ill misdemeanor offender. Petrila, J. (2013). Mental health courts may work, but does it matter if they do? In R. L. Wiener and E. M. Brank (Eds.), Problem solving courts, 133-146. The author discusses the mental health court and how it can be successful, presenting three arguments for how the mental health court has succeeded. Although my topic discusses mentally ill inmates, my focus is not solely on the mental health court, which makes this article unsupportive of my subject. This article's goal is to examine whether mental health courts work and, if so, whether it even matters. Although there is some essential information in this article, it does not provide the right information to support the topic of jails as a dumping ground for the mentally ill.
References
Clark, R. E., Ricketts, S. K., & McHugo, G. J. (1999). Legal system involvement and costs for persons in treatment for severe mental illness and substance use disorders. Psychiatric Services, 50(5), 641-647.
Finn, P. (1989). Coordinating services for the mentally ill misdemeanor offender. Social Service Review, 63(1), 127-141.
Petrila, J. (2013). Mental health courts may work, but does it matter if they do? In R. L. Wiener & E. M. Brank (Eds.), Problem solving courts, 133-146.
Chemotherapy Induced Heart Failure
Introduction:
Chemotherapy is a group of cytotoxic drugs able to kill neoplastic cells [1]. These drugs induce rapid apoptosis, deprive cells of growth, suppress angiogenesis, or compromise the capacity for repair [2]. They act by disrupting cell division, preventing DNA synthesis [3]. The use of such agents is therefore essential for treating patients affected by malignant tumors. However, most types of chemotherapeutics target all rapidly dividing cells and are not specific to neoplastic cells [4]. This fact makes chemotherapy toxic to many non-targeted organs [5].
Cardiac toxicity is one of the worst side effects of cancer treatment, leading to a rise in deaths [6, 7]. Cardiac complications may appear early or late in the course of the disease, and may range from subclinical myocardial insufficiency to irreversible heart failure or even death [8]. Therefore, the current approach is toward detecting highly susceptible patients and providing methods that maximize the benefit of chemotherapeutic use while minimizing its worst effects, or finding other, safer ways to deal with malignant cells [8, 9]. The present summary presents the existing view of cardiac complications involved in chemotherapeutic use, along with recent developments in the area.
Definition of cardiotoxicity
Cardiotoxicity has been described by the National Cancer Institute as toxicity that affects the heart [10]. It has also been defined as a decline in the ejection fraction of the left ventricle [11, 12]. However, there is not yet a clear definition of cardiotoxicity that can be applied to all cancer types [12].
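To make the ejection-fraction criterion concrete, the left-ventricular ejection fraction (LVEF) is computed from the end-diastolic volume (EDV) and the end-systolic volume (ESV):

$$
\mathrm{LVEF} = \frac{\mathrm{EDV} - \mathrm{ESV}}{\mathrm{EDV}} \times 100\%
$$

The exact decline that counts as cardiotoxicity (commonly described as a drop of around 10 percentage points, or a fall below roughly 50-55%) varies between guidelines; those thresholds are illustrative additions here, not figures taken from the sources cited above.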
Factors affecting the cardiotoxicity of chemotherapy:
Several studies have illustrated that the type of cancer-treating agent plays the major role in the toxicity pattern [13-15]. The dosage of the drug also determines the degree of cardiotoxicity [14]. Additionally, the use of drug combinations can be a contributing factor.
Patient-related factors such as age, as well as treatment-related factors including radiotherapy, were found to bear a major responsibility for the final cardiac outcomes [16, 17]. Older age (>65 years) and young (
Failure Analysis of Blister Packaging Camshaft
Project Relevance:
Packaging is one of the largest industry sectors in the world, worth $280 billion. Consumer healthcare packaging represents 4% ($11.2 billion) of the packaging industry. As drug manufacturers approach the 21st century, they face a number of challenges that packaging can help them meet. In this work, the camshaft under consideration operates the forming die of a blister forming station at 45 cycles per minute. When the forming die is in the closed condition, the operating load on the assembly is at its maximum, i.e. 19828 N; in the open condition it carries only the dead load, i.e. 266 N in the opposite direction. Thus, the camshaft is subjected to alternating tension and compression due to bending loads. The shaft fails after a few cycles, and the PLC screen shows the number of cycles completed until failure. The aim of this project is to modify the shaft to withstand these operational alternating loads, which in turn provides a longer service warranty to the customer. The modification of the shaft is based on relaxing the fillet radius. Per the reference paper, shafts of 30 mm and 45 mm diameter, with fillet radii of 0.5 mm and 4.5 mm respectively, were considered, and FEA of the two diameters with their corresponding fillet radii was carried out. From this comparison, the shaft diameter giving the best result was manufactured and tested experimentally. Before the experiment, FEA was also performed for various fillet radii at the 45 mm diameter.
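As a quick sanity check on the loading just described (a sketch based only on the figures above, taking the 266 N dead load as acting opposite to the 19828 N forming load), the alternating and mean components of the load cycle follow directly:

$$
F_a = \frac{F_{\max} - F_{\min}}{2} = \frac{19828 - (-266)}{2} = 10047\ \text{N},
\qquad
F_m = \frac{F_{\max} + F_{\min}}{2} = \frac{19828 + (-266)}{2} = 9781\ \text{N}
$$

For a solid circular shaft, the bending stress at a section carrying moment $M$ is $\sigma = 32M/(\pi d^3)$, so both the increase in diameter from 30 mm to 45 mm and the larger fillet (a lower stress concentration factor $K_t$) act to reduce the stress at the step.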
Literature Review:
J. D. Chougule and R. G. Todkar [1] investigated the failure of a camshaft in the transmission system of a blister packaging machine used for packaging tablets. It was observed that the camshaft fails into two pieces during operation due to fatigue loading. To find the cause of failure, a finite element analysis was carried out. The stress analysis revealed that the most highly stressed area coincides with the fractured region of the shaft, and the theoretical stress matches the sub-model stress values fairly well. The failure analysis shows that the fatigue failure of the shaft is due to a weak section at the step provided for the camshaft mount. To enhance the service durability of the transmission system of the blister packing machine, the stress concentration at the cam step was modified, and a material with high service durability and mechanical characteristics such as fatigue strength, ultimate tensile strength, and fracture toughness was selected.
F. S. Silva [2] presented in detail the possible sources of automobile diesel engine crankshaft failure, i.e. operating sources, repair sources and mechanical sources, and observed that crankshaft failure occurs due to small cracks that develop as an effect of thermal fatigue loading from substantial overheating during the shaft grinding process.
Stuart H. Loewenthal [3] studied a method of power shaft design that accounts for variable-amplitude loading histories and their effects on limited-life design requirements, considering the effects of combined bending and torsion loading and a number of service factors.
S. Abdullah et al. [4] discussed a technique for assessing the fatigue life of a shell structure under variable loadings using finite element analysis for the simulation work, and observed that constant-amplitude tests predict a longer life than that predicted by variable-amplitude tests.
Norman E. Dowling [5] compared material test data with various approaches to estimating the effect of mean stress on stress-life and strain-life behavior, and found that the Walker mean stress equation gives superior results.
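For reference, the Walker relation referred to here expresses an equivalent fully reversed stress amplitude $\sigma_{ar}$ in terms of the maximum stress $\sigma_{\max}$, the stress ratio $R$, and a fitted material exponent $\gamma$:

$$
\sigma_{ar} = \sigma_{\max}\left(\frac{1-R}{2}\right)^{\gamma} = \sigma_{\max}^{\,1-\gamma}\,\sigma_{a}^{\,\gamma},
\qquad R = \frac{\sigma_{\min}}{\sigma_{\max}}
$$

With $\gamma = 0.5$ this reduces to the Smith-Watson-Topper form; the extra fitting parameter is what gives the Walker equation the superior mean-stress predictions noted above.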
M. Omid et al. [6] performed fatigue analysis of a connecting rod to evaluate critical points, calculating stresses and displacements under maximum compression and tension loadings. The caustics method is a very powerful method for detecting the stress distribution in complicated mechanical elements such as connecting rods; by drilling several distributed small holes in the connecting rod, the caustics method can accurately predict the stress value at each hole position.
G. Wang [7] introduced a crack-modeling approach for predicting the fatigue failure of a camshaft, which allows the calculation of an equivalent stress intensity factor, enabling standard fracture mechanics methodology.
H. Bayrakceken [8] investigated the failure of a camshaft of an automobile engine using scanning electron microscopy and chemical analysis of the fractured camshaft material to assess the cause of the fracture.
M. Shaban, M. I. Mohamed, A. E. Abuelezz and T. Khalifa (2013) [9] studied the stress pattern of a crane hook in its loaded condition; a solid model of the crane hook was prepared with the help of ABAQUS software. The real-time pattern of stress concentration in the 3D model of the crane hook was obtained, and the stress distribution pattern was verified for correctness on an acrylic model of the crane hook using a shadow-optical (caustics) setup. By predicting the stress concentration area, the shape of the hook was modified to increase its working life and reduce failure rates. The complete study is an initiative to establish an FEA procedure, validated by results, for the measurement of stresses. For reducing hook failures, estimating the stresses, their magnitudes, and their likely locations is very important. From the stress analysis, they observed the cross-section of the maximum stress area: if the area on the inner side of the hook at the location of maximum stress is widened, the stresses are reduced. The caustics method is a very powerful method for detecting the stress distribution in complicated mechanical elements such as hooks; by drilling several distributed small holes in the hook, it can accurately predict the stress value at each hole position.
Om Parkash et al. [10] studied optimizing the design of a connecting rod under static and fatigue loading. The main objective of their work was to re-optimize the existing design of the connecting rod of a universal tractor (U650) by changing some of the design variables. In their work, the model was developed, analyzed and designed using CATIA 19, PRO-E and ANSYS Workbench v12. Optimization of the connecting rod was done under the same boundary and loading conditions for variation in a few stress and fatigue parameters, i.e. stresses, weight, life, damage, biaxiality indication and safety factor. The stress concentration coefficient was varied to obtain the maximum-cycles condition. The critical regions under both static and fatigue analysis were identified and improved, and the connecting rod was modeled and optimized for reduced weight, improved life and manufacturability. The results obtained from the analysis could be used to modify the design of the existing connecting rod, so that better performance, i.e. reduced inertia, improved fatigue life and manufacturability, could be obtained under varying static and fatigue conditions.
A. A. Pandharabale and Asst. Prof. A. J. Rajguru [11]: the main objective of their paper was to design a model of a dual-worm system for optimal load-lifting capacity, optimal factor of safety and optimal efficiency for reduced power consumption. They derived the optimal power for each individual motor and selected motors for the application so as to make the device compact. The lifting force developed by the dual-worm system was validated experimentally using a test rig. Various characteristic graphs were plotted, such as torque vs. speed, power vs. speed, power consumption of the motor under rated load, and system efficiency vs. speed. They concluded that the torque increases as the output speed decreases, and that the power output shows a rising trend up to a certain output speed and then drops slightly, indicating that the device will slow down slightly if the load is increased.
A. S. Dhavale and V. R. Muttagi [12]: in a four-stroke engine, one of the most important components is the camshaft, and such an important part has, over the years, been the subject of extensive research. In this study, the causes of camshaft fracture are discussed; scanning electron microscopy and finite element analysis are used for the fracture analysis. It is seen that the analyzed camshaft fractured after a very short period of use of the car. The failure occurred as a sudden fracture very close to the journal location, where there is a stress concentration. The main reason for the fracture was determined to be a casting defect. As the failure was related to a material production problem, it is likely to affect more than one vehicle, so the camshafts of vehicles manufactured from that particular series should be replaced. Also, the non-destructive testing procedures of the component supplier should be improved, as the defect could easily have been detected by standard non-destructive techniques.
Zhiwei Yu and Xiaolei Xu [13] conducted a failure investigation on a diesel-engine crankshaft used in a truck, made from 42CrMo forging steel. The crankshaft was nitrided. The fracture occurred in the web between the 2nd journal and 2nd crankpin. The depth of the nitrided layer in various regions of the crankshaft, particularly in the fillet region close to the fracture, was determined by SEM observation and micro-hardness (HV0.1) measurement, combined with nitrogen content analysis by EDAX. The mechanical properties of the crankshaft, including tensile properties, macrohardness (HB) and surface hardness (HV1), were evaluated. Fractographic studies indicate that fatigue is the dominant failure mechanism. The partial absence of the nitrided layer in the fillet region close to the fracture lowers the fatigue strength, leading to fatigue initiation and propagation in the weaker region and premature fracture. The partial absence of the nitrided layer may result from over-grinding after nitriding. To prevent fatigue initiation in the fillet, the final grinding must be done carefully and the grinding amount controlled, to avoid grinding away the nitrided layer.
Aim of Project:
The aim of this project is to modify the shaft to withstand the operational alternating loads, which in turn provides a longer service warranty to the customer. The modification of the shaft is based on relaxing the fillet radius.
Objectives of Dissertation:
- To modify the shaft to withstand the operational alternating loads, which in turn provides a longer service warranty to the customer. For validation, finite element analysis (FEA) found that the stress is highest at the step, where the failure was observed in the field.
- To overcome this problem, the sub-modeling technique in CREO-PARAMETRIC 2.0 is used. Sub-modeling is the technique of studying a local part of a model with a refined mesh, based on interpolating the solution from an initial, global model onto appropriate parts of the boundary of the sub-model. The method is most useful when it is necessary to obtain an accurate, detailed solution in the local region, and when the detailed modeling of that local region has negligible effect on the overall solution.
Research Methodology:
Literature Review
Study of Blister machine and its techniques.
Modeling of Camshaft
FEA of selected materials & cross-sections for particular load
Selection of best material & cross-section from FEA results
Manufacturing of camshaft from FEA result
Experimental testing
Comparison of experimental result with FEA results for given load
Conclusions
Experimental Setup:
Fig. 1: Experimental testing of camshaft on UTM
Results:
FEA results
Results were obtained by FEA as well as by experiment.
Table 1: Results of FEA

Diameter           30 mm          45 mm
Fillet radius      0.5 mm         4.5 mm
Von Mises stress   120.27 MPa     100.23 MPa
Deformation        0.0070434 mm   0.005869 mm
From the FEA, the best result was obtained with the 4.5 mm fillet radius. Comparing the results above, the von Mises stress is higher for the 30 mm diameter than for the 45 mm diameter, and likewise the deformation is smaller at 45 mm than at 30 mm. The next step was to vary the fillet radius at the 45 mm diameter and compare the results against the table above. Since the best result was obtained at the 45 mm diameter, a shaft with these dimensions was manufactured in the workshop. After manufacturing, experimental analysis was done on a UTM, with strain also measured using a strain gauge; from the experiment, values of von Mises stress and deformation were obtained. FEA was then repeated for comparison with the experimental values, but the total load applied to the shaft in the FEA was reduced by 10%, because it was not possible to apply the full load in the experiment, where an external weight was used. The comparison of the experimental and FEA results is given below.
Table 2: Comparison of experimental (optimised shaft) and FEA results

                   Experimental   FEA
Diameter           45 mm          45 mm
Fillet radius      4.5 mm         4.5 mm
Von Mises stress   103.87 MPa     100.23 MPa
Deformation        0.0020 mm      0.00569 mm
So, at the 45 mm diameter, fillet radii of 3 mm, 3.5 mm and 4 mm were also taken; the results obtained by FEA are in the table below.
Table 3: Results of FEA (with different fillet radii)

Diameter           45 mm        45 mm         45 mm
Fillet radius      3 mm         3.5 mm        4 mm
Von Mises stress   112.07 MPa   107.52 MPa    104.78 MPa
Deformation        0.00656 mm   0.006296 mm   0.006136 mm
After studying all the results, the best result is given by the 45 mm diameter with the 4.5 mm fillet radius, so the camshaft was made using these dimensions, and its stresses and deformation were measured on the UTM.
After that, the life cycles were calculated for the 45 mm diameter with its different fillet radii. Comparing all the results, the 45 mm diameter with the 4.5 mm fillet radius gives the best result overall.
Table 4: FEA life-cycle results (with different fillet radii)

Diameter        45 mm     45 mm     45 mm      45 mm
Fillet radius   3 mm      3.5 mm    4 mm       4.5 mm
Life cycles     2.106e5   2.802e5   3.2499e5   4.1978e5
As the above results show, the number of cycles increases as the stress decreases.
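That inverse stress-life trend is what a Basquin-type power law, sigma = A * N^b, would predict. As an illustrative check (not part of the original analysis), the sketch below fits such a power law to the four (stress, life) pairs from Tables 3 and 4, pairing each fillet radius's von Mises stress with its predicted life; since these are FEA estimates rather than test data, the fitted exponent is only a rough indicator:

```python
import numpy as np

# (von Mises stress [MPa], predicted life [cycles]) for the 45 mm shaft,
# taken from Tables 3 and 4: fillet radii 3, 3.5, 4 and 4.5 mm.
stress = np.array([112.07, 107.52, 104.78, 100.23])
life = np.array([2.106e5, 2.802e5, 3.2499e5, 4.1978e5])

# Fit log(stress) = log(A) + b*log(N), i.e. a Basquin-type power law.
b, logA = np.polyfit(np.log(life), np.log(stress), 1)
A = np.exp(logA)
print(f"Basquin fit: sigma = {A:.1f} * N^({b:.3f})")

# Rough life at the baseline 30 mm shaft stress of 120.27 MPa,
# extrapolating the same fit (illustrative only).
N_baseline = (120.27 / A) ** (1.0 / b)
print(f"Extrapolated life at 120.27 MPa: {N_baseline:.3g} cycles")
```

The fit confirms the trend in the tables: each reduction in peak stress buys a disproportionately large gain in predicted cycles to failure, which is why easing the fillet radius from 3 mm to 4.5 mm roughly doubles the predicted life.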
Conclusion:
This chapter presents the concluding remarks on the current research work. The experimentation was performed on a universal testing machine, and the analysis results were discussed in the previous results-and-discussion chapter. The concluding remarks on the experimentation and its results are covered here, the necessary outcomes are presented, and the future scope of the project is stated at the end of this chapter. The outcomes of the current research work are as follows:
From the results of the finite element analysis, it is observed that the maximum stress value is within the safety limit. There is great potential to optimize against this safety limit by removing material from low-stressed regions, reducing weight without affecting the structural behavior. The maximum displacement value is also very small, so material in low-stressed regions can be removed without compromising strength, while remaining within the yield strength.
The von Mises stresses found for the existing (120.27 MPa) and optimized (100.23 MPa) components are within the material yield strength.
The deflections measured for the existing (0.0070434 mm) and optimized (0.00569 mm) models are very small.
Project Expenditure:
Sr. No.   Name of Equipment                    Qty.   Approx. Cost (INR)
1         Material for specimen                01     2000.00
2         Raw material cutting and machining   01     5800.00
3         Bracket joining                      04     1000.00
4         Experimentation and lab charges      --     8000.00
5         Miscellaneous expenses               --     4000.00
          Total expenditure                           Rs. 20800.00
Failure and Success of Jamestown
What is a triumph?
Triumph, a word not commonly used in the small world of middle schoolers, or by anyone in general, means to be successful or to reach a great achievement. Although you may not use this word often, we all know one of its synonyms, victory, and we have all experienced this joy, whether it be getting an A on a math test in your worst subject or simply learning to ride a bike as a young child. Even though these are smaller things to feel triumph over, you still feel as if your world has brightened up. And although you have felt this happiness, the people of Jamestown were not blessed with quite so great an overload of bliss, as the tragedies of colonization in Jamestown first took them down.
What is a tragedy?
When the word tragedy comes to mind, you may not get the best feeling, as the noun carries a sense of misfortune. It also carries the possible feeling of loneliness, of being alone. The people trying to build a colony in the New World witnessed plenty of tragedies, one knowingly being the ultimate loss of a loved one, and, without them even knowing it, the ultimate loss and sickness of themselves.
Background info on attempts at settlement / why was settlement so important in relation to Great Britain?
After the discovery of the New World by Christopher Columbus, and the failure of Roanoke, the first attempt at colonization, the desire to colonize was sparked again. In 1606, King James I granted a charter to the Virginia Company of London, a joint-stock company wishing to benefit economically from the newly discovered area, to begin the colonization of the New World. The king saw this expansion as an opportunity to build his country's wealth through trade, bring it more power, search for the rare mineral gold, and benefit from mercantilism by supplying England with new raw materials produced only in North America.
What is Jamestown?
Jamestown was the first successful attempt at colonization in North America's New World. This colony then became the base for the United States, as even more people felt the need to break away from the King of England and the rest of his country.
Triumphs of Jamestown
1) Tobacco: In 1612, the colonists of Jamestown began to make large amounts of money after investing in tobacco seeds. They had been introduced to this cash crop by John Rolfe, a colonist who had recently been stranded on the island of Bermuda. He thought the crop would bring the settlement money and dig it out of its struggling times; he was right. The demand for tobacco at the time was strong, and, not only that, multiple countries were interested in smoking tobacco. John Rolfe was also a big help to the Jamestown colonists because he was married to Pocahontas, the daughter of Powhatan, the chief of the Virginia Native Americans.
2) first successful settlement
3) first representative gov.
As the colony was at a huge point of struggle on its unknown way to success, the Virginia Company of London, fearing the downfall of Jamestown and the loss of their desired profit, tried to grow the settlement's population and attract more people to the colony by providing them with something they couldn't find in England. Sir George Yeardley, the governor of Virginia, following orders from the Virginia Company of London, instructed the colonists to build a new form of government. About twenty to thirty colonists came together for their first assembly to create the first form of representative government in the colonies, called the Virginia House of Burgesses. This body gave the colonists more say in their government and made sure the colonists' voices were always heard.
John Smith
Tragedies of Jamestown
Starving Time: Not long after Jamestown was founded in 1607, the colonists ran into a dead end as a time of discouragement fell upon the new settlers. A need for food, and no one motivated enough to produce it, almost threw the colony of Jamestown to dust. This period, from 1609 to 1610, was known as the Starving Time. Colonists were so desperate for food that some even turned to cannibalism, eating anything they could. JOHN SMITH: This situation was unacceptable in the eyes of John Smith, an Englishman working with the Virginia Company of London, the sponsor of Jamestown, and one of the first council members of Jamestown, as he saw the hungry and careless colonists of the Jamestown colony. Smith enforced his authority by telling the colonists that those who "will not work shall not eat." His famous quote motivated the colonists to work, ultimately contributing to the growth of Jamestown and the resolution of their hunger.
Native Americans:
Geography of the land: The colonists of Jamestown had no experience settling, and therefore didn't know how to pick the location for their settlement. All they had were the instructions and rules from the Virginia Company of London and their mission to protect and defend their fort. The colonists chose land on a peninsula to make it easy to receive supplies from England and, later, to help with trade. Their location seemed ideal in every way, as it also kept them away from the Native Americans and the Spanish army, but little did they know that their final area of settlement was not good for their well-being. The colonists' seemingly great peninsula idea wasn't looking so good after they found that swarms of malaria-infested mosquitoes made the water their home, and the salt water flowing in from the sea was also unbearable, making it difficult to find the water they needed for a successful, stable colony.
Learning from Failures
Yesenia is my sister and the oldest of my siblings. My sister accomplished something that had not been accomplished in my family: receiving her high school diploma. Many don't realize that a high school diploma opens a few doors, while a college diploma or degree opens many more. My parents were proud of her for receiving her diploma. My sister had a more difficult time getting to where she is now. She ended up on the same path as my mother, pregnant at a young age, but she didn't let her situation get in the way of her education. My sister drew more motivation from her daughter, to give her a better life. She had to be a mother while also being a student, and it wasn't easy, but she had support from her husband and family. My sister is a role model for me. She has always tried to steer me onto the right path and has always been there for me whenever I needed her.
Yesenia is currently a medical assistant in a clinic, working with patients who are fighting cancer. My sister received scholarships, and she set high standards for my siblings and me. Throughout elementary and middle school I was always an excellent student and loved school. At the beginning of freshman year I was under a lot of pressure to achieve more than what my older siblings had achieved. I was always the one who received better grades and certificates from teachers. My parents never thought I would have any problems in school. Growing up, my parents would always brag about me to their friends and family. Not only did my parents set high expectations for me, but family and friends expected me to excel as well. I felt like I couldn't disappoint them if I kept being the student I was; however, I was a disappointment to my parents during my freshman and sophomore years. I also set a bad example for my younger brother. I started skipping class, giving attitude to my teachers, and failing my classes. My family was very surprised by my behavior because I had always been the type of student who would do my work quietly and be shy around others. My attitude continued throughout freshman and sophomore year. I would have parent meetings with my counselor, but I still continued with my behavior. I was placed on probation and had to attend court toward the end of freshman year.
During my freshman and sophomore years I simply didn't care about where I was going and gave up on my goals for the future. I was scared that I wouldn't live up to my family's expectations, so I just gave up. I carried anger toward my siblings who had already graduated, which did not help my relationship with my older brother; we have never really had a brother-sister bond, it has always been like strangers. My sister and I talked about my behavior, and I couldn't quite explain to her why I was behaving the way I was. I didn't want her to think I was weak or that I was scared of failing. I don't like showing emotions. I know my sister honestly cares for me. I realized she wants me to be someone in life when she invited a few of the cousins I grew up with over to my house.
My cousins never graduated from high school, along with most of my family; my sister truly was the first to graduate. It was sort of an intervention, and they talked to me about why they didn't continue in school and how they regretted it. There was foul language in the conversation, but it was all love. They explained that what I was doing wasn't going to get me anywhere in life and that it wasn't too late to fix my problem. I'm very grateful that my sister was helping me out, because I never realized until that conversation that I was letting my education, and myself, go to waste. How far can I go without an education? Nowhere, to be quite honest. For me personally, I have always seen education as a foundation on which I can build a career. I want to have a career and show my family that I worked hard toward a better future. I would like to be a role model. I have seen and heard stories about young girls who aren't allowed to go to school, who are forced into marriage at a young age and are worth less than a cow. I have a golden ticket that many don't get to have, and I was not seeing its value.
Entering junior year, I decided to be the student I was back in elementary and middle school. It was not easy, and I didn't have a clue about the difficulties I would have to overcome along the way. I still think to this day about how I became a more productive student. I made enough improvement in my grades and attendance that I was released from probation. I gave up my Saturdays and school breaks to make up missing credits. I was a different person then, and most people are surprised when I have to explain why I attend Saturday school, winter intersession and spring intersession. Whenever I'm absent from school, I email my teachers asking for my missing assignments or lessons. I have always loved school, but once I entered high school I got nervous and scared.
I had these high expectations from my family, and I carried a fear that I wouldn't be successful in school. I learned that I shouldn't let my fear of what could happen make nothing happen. I still have to work on another fear of mine, public speaking, which is something I struggle with. It has affected my grades in my English and Spanish classes, but I have been working on it. I have also made improvements in my life outside of school. There is a program I will be joining that my probation officer recommended. The program helps you stay on track for graduation, sets up job interviews (both mock and real), and teaches you how to dress for job interviews, how to fill out a resume, and so on. I regret the decisions I made entering freshman year; all I can do is learn from my mistakes and move forward.
The War Against Heart Failure
The human body is an amazing creation, probably one of the most amazing yet fragile natural machines in the world. Within this incredible machine lie two core components that make everything else function: the heart and the brain. The heart pumps gallons of blood throughout the body nonstop until death in order for the body to function, while the brain tells the parts of the body what to do while processing any given information or sensation. Both are capable of amazing feats, and yet both are extremely fragile and would cause serious life impediments if damaged. In this research paper, I will focus on the heart, the motor of the human body, and on heart disease.
The most well-known cause of heart disease is atherosclerosis. Atherosclerosis occurs when fat, joined by bad cholesterol not carried away by good cholesterol, builds up in the arteries as plaque, causing the artery to become weak and less flexible and potentially preventing blood and oxygen from flowing. Besides this, genetics play a large role in heart disease. A person's genetic makeup helps determine how well their body breaks down fat, their metabolism, how their body produces its own cholesterol, and how well it handles cholesterol. Moreover, a person's diet helps determine their risk of heart disease. Diets high in trans fats and refined cereal grains, paired with a lack of fruits, vegetables, protein, antioxidants, calcium, fiber, and complex carbohydrates, affect how well the body breaks down incoming materials that eventually enter the bloodstream, either forming plaque clots or continuing to flow through. Smoking and drinking, which also damage the body, can play a role in heart disease; smoking builds up plaque in the arteries and also damages blood vessels.
Alcohol can also have a negative effect on heart health: consuming alcohol in large amounts can increase the amount of fat (triglycerides) in the blood. Drugs also play a role in heart disease. Illegal recreational drugs can damage blood vessels by leaving toxins in them, can cause abnormal heart beats and rhythms, and can even cause lung failure. Cocaine has a large array of cardiac harms: it can cause chest pain, heart attack, heart failure, stroke, aortic dissection, abnormal heart rhythms, heart inflammation and enlargement, and blood clots. Unfortunately, gender plays a role in heart disease risk too. Studies show that men are more at risk for heart disease than women, because men are shown to eat more unhealthy foods and have higher stress levels with less social stress relief, which also leads to unhealthier coping habits such as drinking or smoking. Most women do not carry the burden of being the financial provider, and tend to have a higher level of social support for dealing with stressful events. Stress does play an indirect role in heart health; it can trigger unhealthy habits such as unhealthy eating, drinking, drugs, and smoking.
Improving one's personal life can definitely lower the risk of heart disease. Avoiding stress, or taking breaks that improve stress levels, helps reduce the chance of becoming overstressed and falling into unhealthy habits that harm the heart. A healthy diet high in fruits, vegetables, healthy grains, and protein, and low in saturated fats, can help keep blood vessels healthy, prevent blood clots, and allow better blood flow. Exercise is a great way to keep a strong and healthy heart: with regular exercise, a person builds up the heart's strength, allowing it to pump blood through the body more easily and to push through and clear plaque buildup, as opposed to weaker hearts. Avoiding smoking devices prevents smoke from damaging blood cells and building up plaque. Limiting alcohol consumption lowers the risk of a rise in blood fat (triglycerides). Avoiding illegal recreational drugs, especially cocaine, avoids many risks: intoxicated blood vessels, abnormal heart beats and rhythms, lung failure, chest pain, heart attack, heart failure, stroke, aortic dissection, heart inflammation and enlargement, and blood clots.
While we may be able to prevent heart disease by maintaining a healthy lifestyle, there are times when heart disease is unavoidable thanks to genetics and other uncontrollable factors. This is where surgeries and medications come into play. There are certain medications used, by a doctor's recommendation and prescription, to help combat heart disease. For those with cardiovascular disease for whom lifestyle change does not seem to be enough, a doctor could recommend medications that lower blood pressure, such as beta blockers, blood-thinning medications such as aspirin, and cholesterol-lowering medications. Those with irregular heartbeats could be prescribed heart-rate-slowing or heart-rate-raising medication, and also taught certain maneuvers that help regulate heart rate, such as straining while holding their breath, dunking their face in cold water, and coughing. Those born with heart defects can take medication that gradually improves the efficiency of the heart as they get older. Those with deteriorating heart muscle can take medications that improve the heart's pumping, or that make it easier for the heart to pump blood with less strain, such as beta blockers. Those with valvular heart disease would take medications similar to those for cardiovascular disease, lowering cholesterol and thinning the blood, with the addition of medications that open blood vessels.
There are several medical procedures people can undergo to prevent and fight heart disease. During a coronary angiogram, a doctor places a tube in an artery near the heart and releases a dye into the blood, which X-rays detect, showing where passages have become too narrow. During an angioplasty, a doctor places a tube into an artery and inflates it so that the artery is expanded, regulating blood flow, clearing the passage, and decreasing the chance of another clot. Cardiac resynchronization therapy is the insertion of a pacemaker into the heart to regulate a heartbeat that may be out of rhythm. Carotid artery surgery is when a doctor opens an artery and removes blockage to improve blood flow. A coronary bypass is when a doctor surgically removes a vessel from a part of your body and grafts it onto your heart to give blood a new passage to flow through. Valve surgery is when a weakened, damaged, or diseased heart valve is repaired or replaced in order to regulate blood flow during heart contractions.
Crowd Disasters as Systemic Failures: Analysis of the Love Parade Disaster
On a hot summer day, July 24th, 2010, a crowd disaster took place at the annual Love Parade event in the city of Duisburg, Germany. The disaster killed 21 people and left 500 injured. The authors' purpose in this paper is not just to describe the incident and how it took place; instead, they provide an explicit analytical account of safety and of preparation for possible disasters, aimed at all event organizers. The authors studied the Love Parade disaster on the basis of eyewitness accounts, closed-circuit camera records, and YouTube video footage.
The Love Parade was first organized in 1989 in Berlin and continued in that city until 2003. It was then interrupted until 2007 for reasons of city permits, lack of budget, and security concerns. In 2007 the Love Parade restarted in the city of Essen, gathering 1.2 million visitors, and in 2008 it was organized in Dortmund. In 2009 it was again cancelled over security concerns. Thus the Love Parade came to Duisburg in 2010, where it turned into unforgettable sorrow and pain.
There were several factors that led the Love Parade festival to its horrible, panicked ending. The most important factor was the space for the festival. The event space was supposed to be chosen on the basis of the expected number of visitors, not on the basis of the capacity of the available space. However, the chosen space was squeezed into 100,000 square meters between a train track on the east and a freeway on the west, far too small for the expected 1.4 million visitors. On top of that, the geographic structure of the space compounded the disaster. The entrance ramp to the event location and the exit ramp were next to each other, making the route a U-turn. When a sea of people with giant floats entering the event location intersected with those exiting in a narrow ramp, the crowd became uncontrollable, and people were run over, stepped on, and piled up among the injured and the dead.
One notable additional reason for this disaster was the failure to start the event on time. The organizers started the event with a one-hour delay, while people were already gathered in the open area of the event. When the entrance was opened, the inflow broke the line and became uncontrollable. This situation increased the crowd's unrest and contributed to the disaster.
Additionally, the organizers failed to predict possible future incidents; the likely behavior of the crowd was not assessed in advance. Communication gaps between the members of the organizing team, and interrupted phone calls, also helped push the event toward its tragic ending. Because of the crowd, people could not get reliable information about what was happening in other corners of the crowd. False rumors created pandemonium, and people pushed and stepped on each other as they ran.
Another important factor in the disaster was the generation of a collective force in the crowd. When a crowd goes beyond control, individual forces multiply into a collective one. In the Love Parade disaster, this physical mechanism was at work, and the crowd became uncontrollable.
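One way to make this "collective force" concrete, drawing on the crowd-dynamics literature rather than on this summary itself, is the notion of crowd pressure: the local density multiplied by the local variance of walking velocities. Sharp rises in this quantity have been associated with the transition to turbulent, uncontrollable crowd motion of the kind described above:

$$
P(\vec{r}, t) = \rho(\vec{r}, t)\,\operatorname{Var}\big(\vec{v}(\vec{r}, t)\big)
$$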
Right after the disaster, the staircase and the crowd movement were blamed for the incident. However, this study shows that the staircase was incidental, not the decisive cause of the disaster. The crucial reasons were the narrow ramp, the organizational failure to foresee the possible dark side of the event, and the accumulation of unrest as people pushed each other in the crowd.
College Failure
I was having a hard time with this subject at first. Then I watched Chelsea (my soon-to-be stepdaughter), who had recently graduated from high school. Chelsea has been having a hard time adjusting to her classes. I have been trying to figure this out, because I had planned on her helping me through college; now, I have been the one helping her through this process. At first, I thought it was complete laziness keeping her from realizing her potential. Then I realized that a number of different factors have been holding her back. I will attempt to identify the causes of college course failure among students. In the end, I hope the causes of this epidemic will help those who are graduating from high school, those who are going back to school, and educators in general.
It is easy to put the blame on laziness. Professors and parents often believe the reason for failing grades is lack of participation by the student. They could be right in some cases, but more often than not they are wrong. Picture this: a student comes home from school, grabs something to eat, and heads to their room. Common sense will tell you that what they should do is start on their homework. The reality is that they start watching YouTube or messaging their friends on Facebook. I have seen this time and time again in my own house. What if the reason for this is not what it seems?
In high school, I have noticed vastly different teaching techniques from what I remember. No Child Left Behind is one of those changes. I think the program started out with good intentions, but it made things difficult for a student who wants to transition from high school to college. I saw the general expectations for student work in high school significantly decrease from twenty years ago. Now students just show up and are passed along, in the hope they will eventually do better later on. In my opinion, this is not good for a student who wants to go to college. The education system needs to get better at preparing graduates for an easier transition into college; schools should have higher expectations of their students' work. I have been in college for half a semester now and have seen the papers students are writing in some of my classes. Some of the papers I have peer-reviewed look barely eighth-grade level, yet somehow the writers have managed to graduate from high school. This may seem harsh, but I am only reporting what I've seen.
The selection of the wrong courses may also contribute to course failure among students. Students seem to pick classes their friends are in. They also tend to gravitate toward classes they deem easy. The fact is that every course in college has its own level of difficulty: I have a beginning art class that is, in my opinion, more difficult than the computer programming class I am taking. Students do not take this into account when selecting classes. The result is students who barely understand the syllabus; when it is time to work on assignments, they are completely confused and lose confidence in themselves. After this, the student just gives up and fails the course.
Some people think a heavy work schedule coupled with a full-time course schedule may also cause course failure. I agree with this to some extent. I have a full-time, forty-hour-a-week job, which I need to keep to support my family. I also take fifteen credit hours of classes, a full school schedule. So far, I am handling everything pretty well. I can see, though, why a high school graduate may have difficulty adjusting to this type of schedule. During high school, those students who had jobs may have been able to work only 15 to 20 hours a week. Now that they are out of high school, they want to make more money and live on their own. This is admirable but in most cases not advisable. High school graduates have not had the years of experience juggling work, fun, and home life needed to be successful in college. They would probably do better to work up gradually to a full-time work schedule as they get more comfortable with college. Another reason for course failure among college students is distraction: it is easy to get distracted by friends or phones.
Failure of Facebook
We are in the middle of a privacy crisis regarding our online information. We may think our information is safe online, but is it? Privacy is something we all need and deserve, and the possibility that some of our information could be getting into the wrong hands is alarming, to say the least. This example of Facebook is just one of the thousands of online companies out there; we trust them, but now that trust is being put to the test. People don't use Facebook nearly as much as when it first came out; our generation is into other things, like Snapchat and Instagram. Ironically, Instagram is owned by Facebook; hopefully it doesn't follow in the same footsteps.
So what did Facebook actually do with the private information of its users? In its 14 years of existence, Facebook seems to have had a problem keeping certain things private. When it introduced the News Feed in 2006, people didn't like how everything they did was put out there on their friends' feeds. The feed would give updates about anything you did on the website, whether a comment or a picture, and everyone on your friends list could see it pop up in their feed. Eventually people grew accustomed to the News Feed, and it's a pretty big part of Facebook's success. Then in 2007, Facebook had its first encounter with advertising privacy issues: Zuckerberg created a feature that let companies track what a user had bought and then advertise that product to all of their friends, sometimes without that person even knowing. After people found out, Zuckerberg issued an apology and gave users the choice to either use the feature or turn it off.
Finally, the mistake that really put Facebook in a downward spiral came in 2011, when Facebook shared information with advertisers despite promising it wouldn't. Third-party apps were able to access nearly all of your information, even private posts, without user consent. In March 2018, Zuckerberg finally came out and said that they had known about the privacy issues but had not done much to stop them. He proposed new ways of addressing this, including more privacy restrictions and a tool that lets you see who has access to your information. There will be an option to withdraw certain companies' access to your information, depending on whether you trust them or not.
Some people may be thinking that they don't care who sees their information because they have nothing to hide. That is true for certain people; however, the fact that these companies could have been browsing what you like to do and what you post means they could control what you see to persuade you to buy their products. Also, if an employer wants to learn more about you, a quick Facebook search could give them access to things they probably shouldn't know about. This is problematic because it could influence their decision to hire you or not.
Privacy is something that should never be meddled with; everyone is entitled to it, and as humans we crave and need it. Facebook crossed a line that made us realize those private posts may not be so private after all. I hope Facebook pulls itself together and creates a safe, private platform that allows users a clear conscience, knowing their information is protected. Sadly, a great company like Facebook hit a major roadblock that cost it greatly. Facebook will not be viewed the same anymore, and I believe its death is imminent.
Evaluate the Successes and Failures of China’s Ethnic Relations Policy in Xinjiang
The People's Republic of China is known as a newly emerged economic power in the international arena, a state deeply involved in international cooperation, peace promotion, and trade. Yet even as one of the top countries in the world, it has a specific region, Xinjiang, that is viewed as an ambiguous issue by both Western and Asian scholars. After the Chinese government introduced its special policy for the development of this region, Xinjiang, located in the western part of China, has come to be populated more by Han Chinese than by the ethnic minorities who had lived in the area for a long time. After the collapse of the Soviet Union, the Central Asian countries became independent, and Xinjiang played a key role in connecting China with those neighbouring states. Thus, the Chinese government began to pay more attention to the western part of the country, developing the agricultural sector and natural resource production over recent decades.
However, such a sudden shift in the direction of economic development, and such active state involvement in the region's affairs, was unexpected by most local residents. By locals, it is meant that this particular region is inhabited by ethnic minorities such as Uyghurs, Dungans, Kazakhs, Mongols, and other nationalities; after the region's development began, Han Chinese also settled in the area, leaving their permanent residences in the eastern and central parts of China. However, the first issue to pay attention to was not only economic development or industrialisation, but the region's educational sphere. As the policy aimed to change all structures of the region, educational institutions were no exception.
The reason I draw attention to the educational sector is that the young are the ones who will lead the country in the near future, inheriting the knowledge and experience to keep pace with an already developed state and make it more prosperous. Hence, to head off possible tensions arising from the newly formed mixture of nationalities, there should first of all be policies or reforms concerning the educational system. Xinjiang has become a culturally diverse place and needs assistance in fostering cooperation and collaboration among all citizens, taking into account religious, national, and cultural particularities. Moreover, given the current status of China's success in economic development and trade (the second-largest economy in the world), I argue that the successes of the implemented policies and measures outweigh the negative aspects that inevitably arise from the imperfections and externalities natural to human societies.
Different views have been expressed by several scholars who have contributed articles and research on Xinjiang, its educational sphere, and the general transformation of western China. The articles provide information not only about how the policy was implemented and what outcomes were expected, but also about the results achieved in the following years. Certainly, it is difficult to work out a perfect strategy with no drawbacks, and a few disadvantages appeared during the policy's implementation. Although my main argument concentrates on Xinjiang's education sphere, the first article presents a general view of the regional development strategy and paints a full picture of what development in the western part of the country is about. The next two articles support the point that education is a vital sphere to which the government paid attention, doing its best to develop and promote a good level of education for the growing generation of ethnic minorities.
The first work is Ethnic Minorities in China's Official Discourse on the Western Development Project by Elena Barabantseva (2009). The purpose of the paper is to assess the consequences of the official Chinese development discourse embodied in the Western Development Program (WDP), launched in 2000, and to evaluate questions of localization and citizenship by relying on official sources, including Chinese scholars and experts.
As soon as the project went into force, in the first year after the official launch of the campaign, more than 600,000 Han Chinese moved into the Xinjiang Uyghur Autonomous Region (Barabantseva 2009, p.12). The government decided to attract Han Chinese citizens in order to contribute to Xinjiang's welfare with their knowledge, experience, and professional skills. Certainly, bringing together such diverse representatives of different cultures could result in a collision of interests; therefore, the authorities sought to pacify, or "harmonize and stabilize", the state's order, hoping that a growing economy would enrich the local inhabitants and that security problems and separatist tensions would decline as a result (Barabantseva 2009, p.13). On the one hand, the project gave a green light to rapid development, opening new doors for anyone interested in improving the region rather than splitting society into nationalities who are allowed to work and those who are not. On the other hand, a new issue emerged: ethnic minorities became localized, i.e., they experienced difficulties taking part in social and economic modernization on the same terms as Han Chinese. According to Barabantseva (2009), "Ethnic minorities [...] are associated with backwardness, underdevelopment, and stagnation among other derogatory characteristics" (p.8).
One more interesting point the author makes concerns poverty. It is estimated that ethnic minorities account for nearly half of the poor in China (Barabantseva 2009, p.21). However, a closer look at this question shows that Shanxi Province, which is officially exclusively Han, "had some of the worst economic indicators in the country" (Barabantseva 2009, p.20). Hence, the WDP program, aimed at raising the level of development in Xinjiang, may well have partly reached its goals by supporting the region's development. Yet the strategy also has disadvantages related to income disparities between Han Chinese and ethnic minorities: although both groups live together and share common norms and aims, ethnic minorities receive lower wages than Han Chinese (Barabantseva 2009, p.21).
Overall, the article succeeds in accomplishing the main tasks set out in its thesis: explaining the issues of citizenship and localization, addressing inequality and poverty, and describing the WDP strategy and its aims of economic development and the unity of a multinational population. However, because the article was written in 2009, only nine years after the launch of such a large and important program, there was not yet enough information to assess results over a horizon longer than a decade, especially given that such huge projects are long-term strategies that are difficult to complete within a short period. As for methodology, the author uses discourse analysis, which is neither a strictly qualitative nor a strictly quantitative method, but discusses the central issues by providing evidence and the author's reflections. This approach suits the article, as it goes straight to the points of the strategy and analyzes the program's key issues.
Another work on education is The Xinjiang Class: Education, Integration, and the Uyghurs by Timothy A. Grose. The paper examines the efficiency of a specific policy for Xinjiang middle-school-aged students, who received the opportunity to study for free for four years in eastern China. The aim of the program was to improve the education and skills of Xinjiang students from ethnic minority families, thereby integrating them into a single, united Chinese society. Although the author's main argument is that Uyghur students resist integration into Han society, the paper contains several strong points revealing the program's positive consequences for the general level of education and the government's good intention to educate the population of its state.
"Providing Uyghurs with an inexpensive education has been portrayed in the Chinese media as the CCP acting as a kind parent" (Grose 2010, p.7). The government created special conditions to attract and motivate students to take part in the program and not only learn the Chinese language but also graduate more experienced and skilled. These conditions comprised lowered standards on the college entrance exam, bonus points, and quota systems at universities in Xinjiang (Grose 2010, pp.4-5). Moreover, there are specific provisions for meals and holiday celebrations made out of respect for Muslim society, such as halal cafeterias staffed by Chinese Muslim or Uyghur chefs and observance of the Nauryz and Kurban holidays (Grose 2010, p.6). But the author also notes that the policy can be seen as a political mechanism "to instill ideas of ethnic unity" (Grose 2010, p.6).
The author also provides interviews (as part of the methodology) with Xinjiang Class graduates, mostly Uyghurs and Kazakhs. The interviewees shared their experiences after graduation and were in most cases less than satisfied with their studies, saying that interaction with Han students was quite uncomfortable and that outside the classroom they spoke Uyghur with their friends far more than Chinese (Grose 2010, p.8).
The author concludes that despite the program's positive characteristics it has failed, yet the paper itself shows that the CCP implemented a fairly organized system for giving pupils better skills and knowledge. The failure the author describes lies in the limited integration with Han Chinese and graduates' low satisfaction. However, integration with Han students remains an important factor, since the two groups will work together to improve the Xinjiang region and should be more tied and united in the future. The article was also written eight years ago, when only two generations of Xinjiang Class students had graduated; now that the international situation has changed, further outcomes and improvements are likely. Rapid economic growth and international collaboration, particularly with the Central Asian states, could have improved the system and prompted the CCP to introduce amendments replacing drawbacks with positive changes.
Compared with the previous article's methodology, this article has the advantage of interviews with first-hand sources, people who were part of the program and are credible, reliable informants. At the same time, the relevance of the data diminishes as countries' circumstances, including political and economic conditions, change rapidly from day to day.
The last article reviewed is Bilingual Education in Xinjiang in the Post-2009 Period by Sonika Gupta and R. Veena (2016). The article examines the changes in educational policy after 2009, a year of ethnic riots that signaled to the government that policy should be reconsidered with attention to the linguistic needs of ethnic minorities in educational institutions. This work, too, concludes that although the central government provides suitable conditions for education, new obstacles emerge during the studying process that tend to disintegrate minorities from Chinese society. Yet the paper offers considerable evidence of official measures introduced in favor of ethnic minorities' interests, which outweighs the negative sides of the question. For instance, the 2014 Xinjiang Conference, attended by Xi Jinping, announced new conditions in minority-dominated regions and stressed the role of ethnic unity as "crucial to developing stability in Xinjiang" (Global Times 2014b, as cited in Gupta and Veena 2016). The state also proposed subsidies for inter-ethnic marriages and for Uyghur migration out of Xinjiang, alongside improvements in inter-ethnic communication between the two groups (Gupta and Veena 2016, p.8).
The policy amendments introduced after 2009 appear to have improved the policies and aimed to prevent further riots or misunderstandings in the region. According to official sources, new measures were implemented in schools: teachers were trained in both languages for teaching purposes, and more funding was disbursed to Xinjiang students, who enjoyed special conditions while studying at high school. As for learning Putonghua (standard Chinese) in kindergarten, it helps children be better prepared for school, since they will be able both to study and to interact with Han Chinese. Such an innovation appears to benefit children, who will be bilingual from early childhood, will probably have no difficulty studying in Putonghua, and may enjoy certain advantages in a future career.
Analyzing and comparing these articles makes clear that the western part of China remains an important region, with its specific multi-ethnic and linguistic character. The central government has introduced a number of measures and created special conditions to foster young people who wish to study and become more skillful and competitive alongside Han Chinese. The first two articles revealed the results of projects like the Western Development Program and the Xinjiang Class up to 2009, and the third article showed that despite the creation of a good environment for development in Xinjiang, in both the economic and educational spheres, there was a real need to reconsider and amend the policy because of the riots that occurred in 2009 in Urumqi.
However, in spite of the failures the system experienced, the region overall achieved high GDP growth and a rising budget from the 2000s onward. According to the news article "Xinjiang's economy grows 7.6% in 2016" (2017), the region's GDP reached 961.7 billion yuan ($139.8 billion) in 2016, with continued expansion of the tertiary sector, which made up 57 percent of the region's gross domestic product in the first three quarters. Hence, the overall development and integration of ethnic minorities has progressed since the beginning of the 21st century, and the policies have largely accomplished their main purposes. The State Council Information Office of the PRC reported that "By the end of 2016, there were 107 public libraries, 90 museums, 53 galleries, 119 cultural centers, and 1,170 township (community) cultural activity venues, all offering free access to the public" (China SCIO 2017).
My Definition of Failure
Every now and then you run into somebody who will tell you, "I'm not afraid of anything," but that is almost impossible. We all come into this world fearing something, whether it be the fear of snakes, spiders, or even swimming in the ocean. My fear happens to be failure.
Failure has always been my biggest fear. I've always wanted to make my parents and friends happy with everything I do. I constantly feel as if I have two giant rocks sitting on my shoulders, because I tell myself that I have to succeed in every challenge that comes my way. Neither of my siblings has done that great in life, which upsets my parents, so I want to prove to them that they raised me right. I am going to show them that I can graduate from college and get a job. A lot of the people I know think that I am going to end up just like my siblings, but my goal is to prove them wrong. Yet there is also that mental block: if I make one bad choice in life, I am going to blow my chances away.
Expectations have always gotten the best of me when it comes to my parents wanting to make my decisions for me. After I finished my senior year of high school, my parents suggested that I go to college and major in a scientific field to become a doctor. I honestly felt that I would fail in life if I didn't get a job that landed me a hefty paycheck at the end of every month. But the more I thought about it, the more I questioned myself. I realized that when I get older, I am not going to want to wake up every morning dreading work. That's when I told myself I am going to do whatever makes me happy. I had a long conversation with my parents and told them that I didn't want to be a doctor or a surgeon; I wanted to pursue a career with the FBI. It is always a pain in the butt when my parents try to make decisions for me, but I have realized that they aren't doing it to control my life; they just don't want me to fold.
Another reason failure scares me is that it is very embarrassing. Nobody likes to fail, but that's just how it is sometimes. I have always been told that you must first fail to succeed, but I just don't understand why. I hate that when people see you fail at something, they automatically think you are not smart or talented enough for that task. That has haunted me ever since I was a kid. Every time I don't succeed at something, it feels like I get knocked back on my ass and can't recover. I tell myself that I need to get back up and try again, but why? Why try again if I don't have the strength and courage to succeed? As I mentioned earlier, if I make one bad mistake, it could haunt me for the rest of my life. How could I look my parents in the eyes and tell them that I failed because I didn't do this or that? Would they be ashamed of me? Would they think that I am not capable of reaching my goals?
Past experiences have also scarred me when it comes to failure. When I was just a kid, my parents had to file for bankruptcy not once but multiple times. To this day they don't think I ever knew about their situation, but I would stay up past my bedtime at the bottom of the stairs listening to their late-night conversations. They didn't want me to know because they didn't want me to worry about the money troubles they were having. I assume that's why they want me to go to college and get a degree, so I don't have to struggle like they did. That's another reason I have always been scared to fail.
I have always been stressed out over this whole fear of failure. I think I have to do this or that to be a happy human being. I can't tell myself that it's okay to do things the way I want to do them, because I feel that if I do, I will mess my whole life up. My family and friends have always played a huge role in my life by supporting me in everything I do. If it weren't for them, I wouldn't have cleared some of the hurdles that have rolled my way. I wake up every day worrying whether I'm making the right decisions in life, and that is why failure has always been my biggest fear.
Introduction to Market Failure
Market failure is an economic situation characterized by an inefficient distribution of goods and services in the market. From another perspective, market failure is a situation in which each individual makes decisions that are correct in their own right, but those decisions together turn out to be catastrophic for the survival of the whole group. A more traditional view defines market failure as a state of imbalance that develops whenever the quantity supplied in the market does not match the quantity demanded. Market failure has also been described as the result of self-interest at work in the market. Market failures can happen for a number of reasons, but the most common involve public goods and externalities, which can be positive or negative.
Public goods are goods that are non-rivalrous and non-excludable. This means that the producer of such goods and services cannot limit consumption to paying customers. National policing and defense are examples of public goods, and market failure occurs when some consumers exclude themselves from paying for public goods but continue to enjoy the benefits (the free-rider problem). In the case of national defense, all consumers enjoy the same share of national defense regardless of their economic status, as it is impossible to produce a private military for each consumer (Melberg, 2015). Public goods cause market failures because most firms shy away from providing them, since they are not a profitable option, and once the product is in the market, the firm cannot prevent consumption of that good by non-payers. This is why the government is the primary provider of public goods, followed closely by voluntary organizations.
Positive externalities are benefits that a good or service confers on a third party. For example, reduced congestion in our cities due to cycling would be a positive externality. Externalities, whether positive or negative, occur when the actions of one person affect the well-being of another individual in the market. Positive externalities are therefore the beneficial side effects of one person's behavior in a free market. For example, if I were to clean our estate compound daily, my neighbors would enjoy a clean environment where their kids can come out and play (Caplan, n.d.).
Negative externalities are the complete opposite of positive externalities. While positive externalities bring about a beneficial effect, negative externalities impose harm on other players in the market. They are the costs that the provision of a good or service makes a third party incur. For example, the sale of cigarettes to smokers has a ripple effect on passive smokers, who are also at risk of cancer, not because of their own actions but because of the actions of another person with whom they share the market. The smoker's decisions and actions affect even those who do not smoke (Investopedia, 2018).
Graphically, the standard diagram shows that when the private cost of an activity is lower than its social cost, a negative externality is likely to occur: the market equilibrium settles at a higher quantity than is socially optimal. Whenever private and social costs (or benefits) diverge, the market equilibrium is distorted, leading to negative or positive externalities.
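To make this concrete, here is a minimal numeric sketch of a negative externality, written in Python. The linear demand and cost curves and the per-unit external cost are illustrative assumptions of mine, not figures from the sources cited in this essay; the point is only that when producers ignore the external cost, the market settles at a higher quantity than the social optimum.

# Minimal sketch of a negative externality with assumed linear curves.
# Demand:                P = 100 - Q
# Private marginal cost: MC = 20 + Q
# External cost: 10 per unit (e.g., harm borne by passive smokers).
# All numbers are illustrative, not empirical.

def equilibrium(d_intercept, d_slope, mc_intercept, mc_slope):
    """Solve demand = marginal cost for quantity, then back out price."""
    q = (d_intercept - mc_intercept) / (d_slope + mc_slope)
    p = d_intercept - d_slope * q
    return q, p

EXTERNAL_COST = 10.0

# Market outcome: producers equate demand with *private* marginal cost.
q_market, p_market = equilibrium(100, 1, 20, 1)

# Social optimum: the external cost is added to marginal cost.
q_social, p_social = equilibrium(100, 1, 20 + EXTERNAL_COST, 1)

print(f"Market equilibrium: Q = {q_market:.1f}, P = {p_market:.1f}")
print(f"Social optimum:     Q = {q_social:.1f}, P = {p_social:.1f}")
print(f"Overproduction:     {q_market - q_social:.1f} units")

Running the sketch gives a market quantity of 40 against a social optimum of 35: the five extra units exist only because part of their full cost falls on third parties.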
References
Caplan, B. (n.d.). Externalities. Retrieved October 26, 2018, from Econlib: www.econlib.org/library/Enc/Externalities.html
Investopedia. (2018, February 13). How do externalities affect equilibrium and create market failure? Retrieved from Investopedia: https://www.investopedia.com/ask/answers/051515/how-do-externalities-affect-equilibrium-and-create-market-failure.asp
Melberg, H. (2015, June 13). What is the relationship between public goods and market failure? Retrieved from Quora: https://www.quora.com/What-is-the-relationship-between-public-goods-and-market-failure
No Income Tax States – a Success or a Failure
In recent years, much debate has been stirred regarding states with no income tax and whether their benefits outweigh the income tax systems already in place. The American Legislative Exchange Council reported that, as of 2018, states with no income tax have outperformed those that have one, primarily in job creation and population growth. Michael Mazerov of the Center on Budget and Policy Priorities explained that his research found no compelling evidence that states without an income tax are outperforming those with one; he suggests that major contributors may instead be family reasons and job opportunities within specific fields. With many contradicting viewpoints on the subject, this paper will try to explain why something that sounds so beneficial can subtly relocate those taxes into other state taxes. Benjamin Franklin once stated that "in this world, nothing can be said to be certain, except death and taxes." In American history, taxes have always been a point of frustration. The major pre-Revolutionary War slogan "no taxation without representation" expresses the colonists' angst toward unfair taxes. From the time the Constitution was enacted until 1840, no state income tax was applied.
Starting in 1840, many states, mostly southern, began to apply state income taxes, some taxing all income and others only part of it. These taxes were brought on by Jacksonian democracy and the Panic of 1837. During the American Civil War and the Reconstruction Era, income taxes rose in many states of both the Union and the Confederacy. Many states continued to add and abolish these taxes until the Great Depression struck in 1929, by which time a third of the states had enacted a state income tax. After World War II, states continued to slowly add state income taxes, with South Dakota and West Virginia being the only two states to abolish theirs. Recently, several states have contemplated getting rid of state income taxes in the wake of the recession of the late 2000s. Currently there are nine states with no income tax: Alaska, Florida, Nevada, Tennessee, Texas, Washington, Wyoming, South Dakota, and New Hampshire. One major question many people have about no-income-tax states is: how do these states raise enough money to run their governments without collecting an income tax? The answer is different, and slightly confusing, for each state.
Many states look at the revenue each tax can bring in and whether it is viable enough to justify applying it. Wyoming is a great example. With a population of just over half a million, an income tax would be an unlikely way to supply enough money to run a state government. Instead, Wyoming relies heavily on severance taxes and mineral royalty taxes, thanks to its abundance of natural resources. With a small population and a lot of open land, Wyoming does not have to concentrate its government resources on highways and hospitals, making the absence of a state income tax a more viable option for its people. Another good example is Nevada, whose major revenue comes from sales taxes and gaming taxes. As a major tourist destination because of its many casinos and gambling opportunities, Nevada's lawmakers know that a state income tax would not be as necessary. Also, without a state income tax, many who want to work in the gambling industry, or who are fans of gambling, are more likely to relocate to a state with no income tax and few restrictions on making money from gambling. These two examples show that no two states are alike: each has a different population, different resources, and different sources of income.
It is up to each individual state to know what type of revenue it best generates and to apply the proper taxes. Two specific exceptions to the typical no-income-tax state will now be presented. The first is Florida. With the advantage of fair year-round weather, it has seen a major influx of retirees. One major reason is that Florida does not tax retirement income. For many retirees and elderly residents, this is important to ensure that their later years are spent on more than just paying taxes. Florida's constitution prohibits the collection of any state income tax, so residents can rest assured that this will not change in the near future. On the other hand, one positive thing about living in Florida, its beautiful weather, can also become a negative: homeowners' insurance. Due to the constant threat of hurricanes, Florida has the highest home insurance costs in the country, and this expense is a major reason Florida's state government can afford not to have a state income tax. Even with the high cost of home insurance, people like professional golfer Tiger Woods still find it profitable to live in Florida. In 2010, fellow professional golfer Phil Mickelson expressed his dissatisfaction with California's state income tax, stating that the state took 60% of his income in taxes.
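Mickelson's 60% figure bundles several taxes together; the state income tax share alone is easy to sketch in Python. The income figure below is a hypothetical round number of my choosing, and applying California's 13.3% top marginal rate (the rate cited in the next paragraph) as a flat rate overstates the bill, since real liability is bracketed and reduced by deductions, and federal tax is owed in either state.

# Rough comparison of state income tax burdens (illustrative only).

def state_income_tax(income: float, rate: float) -> float:
    """State income tax under a simple flat-rate assumption."""
    return income * rate

income = 40_000_000.0  # hypothetical annual earnings for a top athlete

ca_tax = state_income_tax(income, 0.133)  # California's top marginal rate
fl_tax = state_income_tax(income, 0.0)    # Florida levies no income tax

print(f"California state tax: ${ca_tax:,.0f}")
print(f"Florida state tax:    ${fl_tax:,.0f}")
print(f"Annual difference:    ${ca_tax - fl_tax:,.0f}")

On these assumptions the state-tax difference alone exceeds five million dollars a year, the order of magnitude that can make relocation attractive even with the country's highest homeowners' insurance.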
Golfer Tiger Woods showed rare sympathy, explaining that this was why he moved from California to Florida when he turned professional. To better understand the complaint, one should compare the taxes imposed at Mickelson's home in Rancho Santa Fe, California and Woods's home in Palm Beach, Florida. As of 2017, Mickelson not only pays a 13.3% income tax rate compared with Tiger's 0%, but also pays more in sales and property taxes. Though seen as an elitist, Mickelson has a point about some of the advantages of moving to a no-income-tax state. What he should consider is that paying the country's highest homeowners' insurance rates on what I imagine would be a fairly large and expensive mansion might equal what he would not pay in state income taxes. Other no-income-tax states must use other means of persuasion to lure people in. One that might have the hardest time is the cold, harsh state of Alaska. With its typically extreme cold weather, Alaska had to be creative about ways to get people to populate the state. A major revenue source for Alaska is its oil pipelines, making it possible for the state government not to depend on a state income tax.
Due to the success of its oil pipelines, the Permanent Fund Dividend allows Alaskan citizens to receive a check from the state of roughly 700 to 1,000 dollars, depending on the price of oil in a given year. These factors alone make Alaska one of the friendliest no-income-tax states to live in within the United States. The success of Alaska's oil pipelines has attracted many workers, and the state has also seen a slight influx of retirees. Not only are their retirement income and Social Security untouched, but, depending on their age, they can also have state municipal taxes waived on the first 150,000 dollars of the assessed value of their home. Even with all these positives, Alaska still finds it difficult to get people to move to its tax-friendly state. For retirees, the constant cold and limited access to medical care make moving there a risky endeavor. Also, Alaska puts a high tax on vehicle rentals, at ten percent, and on recreational vehicles, at three percent. Locally levied taxes of between 1 and 7 percent apply to raw fish, hotel and motel stays, severance, liquor and tobacco, gambling, tires, and fuel transfers.
With all of these additional taxes, it is apparent that even with the advantage of no income tax, those taxes are always reapportioned into other taxes essential to covering a state's daily expenses. Along with Alaska's climate, these extra taxes will more than likely affect residents daily. Like Wyoming, South Dakota has a sprawling landscape, and like the other no-income-tax states, it funnels what would have been collected into other taxes. South Dakota relies primarily on sales tax revenue; as of 2010, its general sales tax accounted for 56.9 percent of the state's revenue. On top of the high sales tax, residents of South Dakota can also expect property taxes slightly above the national average. Another state that relies heavily on high sales and property taxes is Tennessee, which leads the country with the highest combined state and local sales tax rate. Tennessee's state sales tax alone is the highest in the country at seven percent; combined with local sales taxes, it can reach 9.45 percent. To put this in perspective, 9.45 percent is more than double the combined rate in the heavily touristed state of Hawaii. What many forget about a state's sales tax is that every time you go out to eat, the sales tax is added to your food order.
This taxation hits the general population even harder because those with the lowest incomes suffer the most, given that food is essential for survival. If people who earn less must spend more on essentials, they are in danger of not being able to eat what the average person needs. Another tricky thing about a no-income-tax state like Tennessee is that cities along the state borders have companies that employ many residents of Georgia. This creates a whole new set of problems for individuals who live in one state and work in another. Chattanooga, Tennessee borders Georgia, and because it is so close, many Georgia residents work there. What many people don't realize about working in a no-income-tax state is that you still have to pay income tax to your home state. If you or your employer do not withhold that tax, you are left with a hefty payment at the end of the year, depending on how much you earned. Another complication can arise from owning a home in more than one state, which could leave someone paying income taxes to both states, a financial hardship for most. Many get around this by listing the state with the lower income tax rate as their primary place of residence. These rules can differ for individuals in the military, who may live in several states and countries within a given year, including on a military base.
This can obviously be very confusing, so the government has those in the military pay taxes to the state in which they lived when they first enlisted. For some, this is a great benefit, since some states do not tax military pay. Many studies have shown that no-income-tax states are growing at a faster rate than states with an income tax. From 2008 to 2013, Texas gained over a million jobs while California lost jobs; the Bureau of Labor Statistics employment survey put the figures at 1.8 million jobs gained for Texas and five thousand lost for California. Over the same period, New York lost over two hundred thousand jobs while Florida gained twenty-nine thousand. Other experts dismiss these numbers as coincidence, pointing instead to climate, cheaper housing, and higher wages for some residents. It is also important to point out that one of the most successful places in any state is Silicon Valley, which is not only thriving but is a huge contributor to California's economy at the moment.
This is happening while low-tax states like Alaska are currently doing poorly. Still, job growth is obviously a huge incentive for states to keep working to attract more businesses with tax advantages. States know that if businesses come, more people will flock there for work; when that happens, there is a larger pool of qualified workers, making communities more economically viable and thriving. A perfect example can be seen in the St. Elmo community of Chattanooga, Tennessee. Fifteen years ago, this was a neighborhood showing its age, houses included. After businesses started coming to the area, young professionals began buying houses and fixing them up, which soon led to an initiative to make the most of the obviously tourist-friendly downtown. Now St. Elmo is prominent and growing at a rapid rate. As far as the economy is concerned, it can sometimes be like playing the lottery. Depending on the economic state of the country, the political parties in control, or wars active around the world, it is always hard to tell what might happen to a particular state's economy. States that usually thrive, like Alaska, can see oil prices plunge, causing a huge deficit in the state budget.
Recently, Alaska faced a $3.8 billion state budget deficit that made it reconsider a state income tax and sales tax. This should always be in the back of one's mind when considering a move to a state that may look like a home run but might eventually turn into a strikeout as the economy fluctuates. Nevada is another state that could see a major change in economic fortunes now that sports betting is being allowed in many other states. A state that prided itself on being the only one where you could bet on sports and visit casinos must now compete with states that may offer more tourist value. Unless Nevada can boost its tourism and offer more than these other states, it may soon find itself considering a state income tax, or some other tax, to make up for what may be lost. In conclusion, no-income-tax states do have some benefits compared with income-tax states. The problem is that many of these benefits hide the real taxes, which are made higher to compensate for cutting out income taxes. For states such as Wyoming and Nevada that have other ways of funding their governments, it makes sense. However, in states like Tennessee it seems like a sneaky way to put taxes where they do not need to be.
Tennessee is a tourist state, with Nashville and Gatlinburg, but having a sales tax almost double Hawaii's is hard to fathom. The groups that benefit most from having no state income tax are retirees and some military professionals who were living in a no-income-tax state when they enlisted. Those who benefit least are people with lower incomes. In states like Tennessee, where the sales tax is so high, many who need every cent to count will find their grocery bills rising with every piece of food taxed. As history has taught us, taxes will always be a part of our lives in America, keeping our state and local governments running. What people need to do when moving to a state specifically for tax purposes is to research whether no-income-tax states are really a benefit and not just a way to reapportion taxes elsewhere.
Failure to Properly Treat Inmates who Struggle with Substance Abuse
The California Department of Corrections and Rehabilitation's (CDCR) mission statement asserts, "We enhance public safety through safe and secure incarceration of offenders, effective parole supervision, and rehabilitative strategies to successfully reintegrate offenders into our communities" (Rehabilitation, 2017). Don't let the word "Rehabilitation" in the name fool you into thinking this is actually what is being accomplished with the few drug programs and services in place behind prison walls. Drug abuse is running rampant in California prisons. Just today I read a news article about San Quentin State Prison and a spike in opioid overdoses there, averaging about one overdose a month. The Bureau of Justice Statistics reports that 76.9% of drug offenders are arrested for a new crime within five years of release, implying that drug offenders have among the highest recidivism rates of all prisoners (Durose, 2014). The aim of this paper is to examine this problem by looking at the current drug treatment options available in CDCR facilities, explaining why they fail to properly treat drug addiction, and exploring possible alternatives.
Addiction is a chronic, relapsing disorder characterized by compulsive drug seeking, continued use despite harmful consequences, and long-lasting changes in the brain. It is considered both a complex brain disorder and a mental illness. Drug rehabilitation is the process of medical or psychotherapeutic treatment for dependency on psychoactive substances such as alcohol, prescription drugs, and street drugs such as marijuana, cocaine, heroin, or amphetamines. Treatment includes medication for depression or other disorders, counseling by experts, and the sharing of experience with other addicts. According to CDCR's Division of Rehabilitative Programs, the department provides numerous rehabilitative programs and services to both prisoners and parolees to help offenders leave prison ready to succeed despite past obstacles. All of CDCR's rehabilitative programs are offered on the condition that the offender is serving, or has served, time on good behavior and that the services are determined to be needed. Reviewing the website, I found a list of the programs offered and could locate only one listed as a Substance Use Disorder service. The only other similar service I saw was the Substance Abuse Treatment and Recovery Program, which is offered only to inmates within four years of their release date.
To find out more about the drug rehabilitation taking place inside CDCR prisons and its effectiveness, I interviewed two inmates, whom I will call Travieso and Cholo. Travieso is serving the 13th year of a 15-year sentence at the California Substance Abuse Treatment Facility and State Prison (SATF). Cholo is serving the 10th year of a 13-year sentence at California State Prison, Los Angeles County (CSP-LAC). Between the two of them, they have done time in nine different California prisons: Richard J. Donovan Correctional Facility, CSP Los Angeles County, CSP Sacramento, CSP Corcoran, Folsom State Prison, Kern Valley State Prison, Salinas Valley State Prison, Calipatria State Prison, and the California Substance Abuse Treatment Facility and State Prison, Corcoran. This has given them the opportunity to experience how these facilities differ, witness the drug problems, and see what programs and services are offered to inmates who struggle with substance abuse. I asked them both a series of questions, and to no surprise, their answers echoed each other.
Through my interviews with Cholo and Travieso, I quickly realized that CDCR's website has a way of glorifying the department's focus on rehabilitation. From the information these inmates gave me and from my research, there is only one drug program offered in prison: Narcotics Anonymous (NA). As it was described to me, NA is an inmate-led peer support group for recovering addicts, who share their experiences and stories and follow a 12-step program. There is no real addiction counseling or intervention from doctors or therapists. The class always has a long waiting list, often a year or more, and the waiting game discourages inmates from signing up unless they have a parole date coming up.
In CSP-LAC, each yard offers two NA classes, each made up of 30 inmates and lasting one year. Each yard houses about 1,000 inmates, which means the drug rehabilitation program (NA) can be utilized by only about 6% of the prison population per year. In addition, once inmates finish their NA program, they go back to their routine prison life, dealing with drugs and temptations on their own. Prisons do have program counselors, but both Cholo and Travieso say their counselors don't really care, which makes meaningful treatment difficult. Meaningful treatment cannot be achieved in a place where some staff members are the same ones smuggling contraband and drugs into the prison for a nice extra payoff. Staff whose mission is supposedly "to successfully reintegrate offenders into our communities" contradict that mission with their crooked actions. They are not helping; they are setting inmates up for serious failure, stuck in a toxic addiction cycle.
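The arithmetic behind that 6% figure is worth spelling out; this small Python sketch simply restates the capacity numbers reported in the interviews above (the variable names are mine):

# Back-of-the-envelope check of NA program coverage at CSP-LAC,
# using the per-yard figures reported in the interviews.

classes_per_yard = 2
inmates_per_class = 30
yard_population = 1000

served_per_year = classes_per_yard * inmates_per_class  # 60 inmates
coverage = served_per_year / yard_population            # 0.06

print(f"Inmates served per yard per year: {served_per_year}")
print(f"Share of yard population reached: {coverage:.0%}")

At 60 seats for roughly 1,000 inmates, even a perfectly run program leaves about 94% of a yard without structured treatment in any given year.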
To learn more about these inmates' feelings, views, and opinions on this issue, I asked them, "What do you think contributes to the drug addiction problem inside prison?" Travieso claimed that prisons lack programs to keep inmates productive. At SATF they are regularly put on lockdown, and Travieso believes most inmates turn to drugs because they are bored, depressed, stressed, and tired of the same routine of daily prison life, with nothing to look forward to but walls. Similarly, Cholo said, "Emotional distress is the reason why drug addiction is a problem in prison... just waking up and realizing you're in prison makes you miserable and you just want to find an escape." I took some time to reflect on their words and realized that while in prison, these inmates suffer imprisonment in many different forms: physical, mental, emotional, and spiritual. With no one to positively support them, they have nothing to turn to except violence and drugs.
After my research and interviews, I concluded that the idea of rehabilitation in CDCR prisons is great in theory but not in practice. There is a dire need for more programs that support drug addiction recovery inside prisons. The more programs there are, the shorter the waiting lists, and the more of the inmate population will have access to self-help. This could include addiction counseling, mental health services, drug detox, behavioral therapy, doctor visits, and more interaction with experts who genuinely are there to rehabilitate. Regular visits from counselors, psychiatrists, therapists, and physicians would provide inmates with a dependable support system and consistency. Instead of continuing to lock down humans who are already locked up, CDCR should give them something productive to do. By providing more jobs or even hobby classes, inmates can have something to look forward to, something to get them out of their cells and temporarily escape their repetitive routine in hell. Art, music, and crafts are just a few therapeutic ways inmates can express themselves and relieve physical tension, mental stress, and emotional anxiety. Until CDCR gives inmates opportunities to participate in healthier, more positive options, drugs and violence will remain the easiest way of dealing with the physical, mental, and emotional trauma of life in prison.
Sustained drug use in prison is explained by the fact that many prisoners suffer from drug addiction and dependency; their drug taking in prison is a continuation of the addictive and habitual behaviors of the period before imprisonment. I wanted to further explore the perceptions of drug use held by the group that uses drugs (prisoners) and the group that tries to prevent, detect, sanction, and treat drug use (prison staff). Prison staff emphasize drug addiction and prisoners' troubled life trajectories when explaining in-prison drug use. Prisoners, on the other hand, explain that drug use can be a way to alleviate some of the pains of imprisonment; an integral part of social life in prison; a route to status in the prisoner community; and a defiant way to subvert institutional rules and expectations. Imprisonment as such does not appear to motivate prisoners to reduce their drug taking.
It has been argued that the extent of drug use in prisons derives from a complex interaction among factors such as availability, drug control strategies, and the drugs' psychopharmacological properties (Mland, 2016). Marijuana and heroin are the most commonly used prison drugs because both are reported to aid relaxation, relieve stress, and counteract boredom. More specifically, heroin's attraction lies partly in the way it kills time, alleviates anxiety, and allows the user a temporary escape from reality. The use of these drugs makes prisoners better able to cope with the hardships of imprisonment. Drug use is thus interpreted as a form of self-medication and as a coping strategy.
One thing both parties agree on is that illicit drugs are running rampant in our prison system. The availability of these drugs inside prison walls becomes an inhibiting factor for rehabilitation: how can you stay away from drugs when they surround you? In addition, prisons hold not just drug addicts but also drug dealers. Dealers in prison use various tactics to maintain the number of drug users within the institution, preserving their lucrative business and keeping profits high. This peer pressure can be coupled with intimidation, threats, and violence.
These inmates are confined to tiny cells for most of the day, which is often tedious and boring. This can cause psychological deterioration as well as the physical soreness, aches, and pains that develop from limited movement in a limited space. Knowing that you have to serve a sentence in prison is a harsh reality to accept, and not all inmates know how to do their time productively. Inmates who go to school or work inside the prison at least have something going for themselves; they get out of their confined spaces for a few hours. When this is not the case, inmates will seek out drug use to give their day a sense of purpose.
In conclusion, research on drug use in prison has been tied to both the deprivation and importation models. The deprivation model implies that drug use might well be a means of adapting to the pressures of confinement, an adaptation that can occur independently of influences outside prison (Woodall, 2010); it suggests that a substantial number of inmates become involved in drug use only after they have been confined. The importation model draws a direct link between pre-prison experience and prison behavior (Woodall, 2010); with regard to drugs, the implication is that drug use inside prison is associated with influences outside it, whether pre-prison involvement in drug use, upbringing, or other life experiences.
Semi-structured Interviews with Clinicians
METHODS
Study Design
We conducted a qualitative study using semi-structured interviews with clinicians at a large academic medical center in the Northeast. Approval to conduct this study was obtained from Dartmouth-Hitchcock Medical Center's (DHMC) Committee for the Protection of Human Subjects.
Participants and Recruitment
We used a criterion-based sample of clinicians (n=58), including attending surgeons, house officers, clinical nurse leaders, respiratory therapists, registered nurses, and nurse assistants. The units where interviewees worked varied by several measurable characteristics, including primary surgical specialty, number of inpatient beds, and acuity of the patient population. We purposefully sampled by clinician type, years of experience, and patient population to maximize diversity and extend findings to a wider demographic. The hospital's site coordinators and the nurse managers on each unit helped identify potential participants based on interdisciplinary team composition. The study team then contacted individuals directly to invite participation. We obtained participants' consent to record and transcribe their interviews.
Data Collection
Interviews were conducted between February and June 2017. A registered nurse trained in qualitative interviewing conducted all interviews, each lasting between 30 minutes and 1 hour. A semi-structured interview guide was developed from a conceptual model created by A.G., grounded in concepts of high-reliability organizations and adapted from a 2014 study by Wakeam et al. (8). The goal of the interview guide was to understand not only provider perceptions of rescue but also organizational influences on the rescue process and how these are conveyed to staff. The guide explored both positive and negative influences on rescue (see Appendix A). All interviews were audio recorded and transcribed verbatim.
Data Analysis
Interview transcripts were managed and analyzed using NVivo 11 qualitative software (QSR International). We conducted thematic analysis and consensus coding of the verbatim transcripts. Analysis began with independently reading, highlighting, and annotating transcripts to identify patterns. We refined these patterns into a systematic list of codes (a codebook) describing the content of each data excerpt. After this initial development, the codebook was applied to a new transcript and again refined by author consensus. After three iterations of codebook refinement, we reached thematic saturation, as no new themes or patterns emerged. All transcripts were then coded using the final codebook. We used a framework matrix approach to synthesize our coding and identify overarching themes that facilitate or impede the rescue process (9).
Commitment to Failure
"Most people fail, not because of lack of desire, but because of lack of commitment," said Vince Lombardi. Many college students find themselves unready for college, leading them to drop or fail courses. Many are high school graduates who think they are ready for the next chapter of their lives: college. College, however, is different from high school. Three main causes of failure and college dropout are the bad study habits practiced in high school, a lack of academic preparedness, and a failure to commit to putting in the work.
One cause of failure and dropping out of college is bad study habits. Students in high school would go home to study, but there are excessive distractions at home, such as television and music. "Flickering lights and sounds not only pique your children's interest, they slow down the learning process and hinder reading comprehension" (Garcia). The internet and cell phones might seem like study resources, but "multitasking teenagers who text, listen to music, have Facebook open, and are Skyping with a classmate while doing their homework are less likely to retain information they're studying" (Miller). Another habit many people are guilty of is procrastination, when students wait until the last minute to do assignments. "Procrastination can lead to doing things halfway and not retaining as much information as necessary to ace that exam" (Rachel M).
In addition to bad study habits, lack of academic preparedness is a reason for failing in college. Why is it that students who earn good grades and high GPAs in high school score low on college placement tests? This forces a student to take developmental classes in college, learning material that should have been learned in high school. Students come to college thinking it is more or less like high school and that they can get by without studying, taking notes, or asking questions. Another reason is that the system failed them: teachers and counselors do not prepare students with the correct material or courses they should be learning in high school, leaving them classes or even a year behind in college. In some schools, counselors just want students to graduate high school instead of having them stay back another year to learn the subjects they will need in college.
Another reason for failing is that students do not commit to their education as they should. Multiple obligations can interfere with college, such as having a job. Many college students attend school while working a part-time or full-time job, which is a considerable distraction. Students either miss school or stay up late because of their job, making it harder to commit to school. Another source of weakened commitment is personal issues with family, friends, spouses, or children, which can become a huge distraction. Most students drop out of or even fail college due to bad study habits, lack of academic preparedness, and lack of commitment toward school.
References:
Garcia, Lori. "10 Bad Study Habits And How To Fix Them." Babble, 8 Oct. 2018, https://www.babble.com/kid/10-bad-study-habits-and-how-to-fix-them/
M, Rachel. "Extremely Bad Study Habits Of College Students." 8 Oct. 2018, https://www.iwriteessays.com/college/extremely-bad-study-habits-of-college-students
Miller, Tom. "5 Bad Study Habits You've Probably Been Following." 8 Oct. 2018, https://www.lifehack.org/articles/productivity/5-bad-study-habits-youve-probably-been-following.html
Synthesis of New Solid Forms of Bumetanide and their Characterization
1. INTRODUCTION
Bumetanide (BUM), 3-(butylamino)-4-phenoxy-5-sulfamoylbenzoic acid, is a loop diuretic of the sulfamyl category most often used to treat heart failure, particularly in patients for whom high doses of furosemide or other diuretics are ineffective. The main differences between bumetanide and furosemide are bioavailability and potency. About 60 percent of furosemide is absorbed in the intestine, and there are substantial inter- and intra-individual differences in its bioavailability (range 10-90%). About 80 percent of bumetanide is absorbed, and this absorption does not change when it is taken with food; the predictable absorption is reflected in a more predictable effect. Bumetanide is roughly 40 times more potent than furosemide in patients with normal renal function. It is sometimes used for weight loss because, as a diuretic, it removes water from the body.

The study of the solid-state chemistry of pharmaceutical solids spans all disciplines from drug discovery to successful marketing. A clear understanding of the molecular structure can lead to better design and control of drug performance. Moreover, interest in the subject of pharmaceutical solids stems in part from the FDA's drug substance guideline, which states that appropriate analytical procedures should be used to detect polymorphic, cocrystal, hydrated, or amorphous forms of the drug substance. Solid forms are usually more stable than their liquid counterparts. The main factors affecting absorption by the oral route are aqueous solubility, physical/chemical stability, and permeability. The two fundamental parameters that define oral drug absorption, aqueous solubility and gastrointestinal permeability, are used as the basis for the BCS classification scheme; the biopharmaceutical classes are defined in Figure 1. According to the solubility and permeability parameters defined by the FDA, 'A drug substance is considered highly soluble when the highest strength is soluble in 250 mL or less of aqueous media within the pH range of 1-6.8 at 37 ± 1 °C' and 'A drug substance is considered to be highly permeable when the systemic bioavailability or the extent of absorption in humans is determined to be 85 percent or more of the administered dose based on a mass balance determination (along with evidence showing stability of the drug in the GI tract) or in comparison to an intravenous reference dose'.
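The FDA criteria quoted above lend themselves to a simple decision rule. Below is a minimal, illustrative Python sketch of BCS classification based on those two criteria; the function name and boolean framing are our own, not from any regulatory library.

```python
# Illustrative BCS classifier based on the FDA criteria quoted above.
# Function and parameter names are hypothetical, for illustration only.

def bcs_class(highly_soluble: bool, highly_permeable: bool) -> str:
    """Return the BCS class from the two FDA criteria.

    highly_soluble:   highest strength dissolves in <= 250 mL of aqueous
                      media over pH 1-6.8 at 37 +/- 1 C.
    highly_permeable: extent of absorption in humans >= 85% of the
                      administered dose.
    """
    if highly_soluble and highly_permeable:
        return "Class I (high solubility, high permeability)"
    if not highly_soluble and highly_permeable:
        return "Class II (low solubility, high permeability)"
    if highly_soluble and not highly_permeable:
        return "Class III (high solubility, low permeability)"
    return "Class IV (low solubility, low permeability)"

# Bumetanide is described in the text as a BCS class II drug:
print(bcs_class(highly_soluble=False, highly_permeable=True))
```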
Bumetanide is a diuretic agent belonging to BCS class II. A survey by Lin et al.3 revealed that almost 40% of marketed medicines and 90% of emerging new chemical entities suffer from poor solubility, which allows them to be eliminated from the gastrointestinal tract before they are completely dissolved and absorbed into the blood circulation, resulting in low bioavailability, lower potency, and higher dose strength. For these classes of drugs, a higher dose strength is required to ensure therapeutic concentrations in blood, but this may lead to adverse or toxic effects on the recipient's body. Thus, new solid forms of the API that overcome these problems without affecting its chemical nature greatly reduce the risk, time, and cost of development.
1.2. Types of solid forms: Solid forms can be classified by their range of order: crystalline (long-range order), amorphous (short-range order), and liquid crystalline. Crystalline solids may be single-component (polymorphs) or multicomponent; multicomponent solids are either ionic (salts) or non-ionic molecular adducts, the latter including solvates/hydrates and cocrystals.

Solid is one of the four fundamental states of matter (the others are liquid, gas, and plasma). In solids, molecules are closely packed, and solids are characterized by structural rigidity and resistance to changes in shape or volume. Unlike a liquid, a solid object does not flow to take on the shape of its container, nor does it expand to fill the entire volume available to it like a gas does. All atoms in a solid are tightly bound to each other, either in a regular geometric lattice (crystalline solids, which include metals and ordinary ice) or irregularly (amorphous solids such as common window glass). Solids cannot be compressed with little pressure, whereas gases can, because gas molecules are loosely packed.

In materials science, polymorphism is the ability of a solid material to exist in more than one form or crystal structure. Polymorphism can potentially be found in any crystalline material, including polymers, minerals, and metals, and is comparable to allotropy, which refers to chemical elements. Most pharmaceutical molecules are polymorphic. Polymorphism is the ability of a chemical compound to crystallize, depending on the crystallization conditions, in different crystal structures, alias polymorphs. Molecules in the crystal structure of a polymorph are bonded by weak interactions (hydrogen bridges, van der Waals forces, π-π interactions).

Amorphous materials have an internal structure made up of interconnected structural blocks. These blocks can be similar to the basic structural units found in the corresponding crystalline phase of the same compound. Whether a material is liquid or solid depends primarily on the connectivity between its elementary building blocks: solids are characterized by a high degree of connectivity, whereas structural blocks in fluids have lower connectivity. In the pharmaceutical industry, amorphous drugs have been shown to have higher bioavailability than their crystalline counterparts due to the high solubility of the amorphous phase. Moreover, certain compounds can undergo precipitation in their amorphous form in vivo, and they can decrease each other's bioavailability if administered together.
1.3. Crystal Engineering: Crystal engineering developed over the past 65 years as a natural outcome of the interplay between crystallography and chemistry: chemistry deals with molecules, while crystallography deals with regularly arranged molecules. The term 'crystal engineering' was coined by Pepinsky in 1955. According to Pepinsky, 'crystallization of organic ions with metal-containing complex ions of suitable sizes, charges and solubilities results in structures with cells and symmetries determined chiefly by the packing of complex ions. These cells and symmetries are to a good extent controllable; hence crystals with advantageous properties can be engineered.' Crystal engineering is an interdisciplinary area that bridges chemistry and crystallography. In the present decade, research has focused mainly on controlling the directionality and strength of intermolecular interactions in the design of molecular crystals. A. I. Kitaigorodskii gave a definition of the molecular crystal which states that, within a molecular crystal, it is possible to identify groups of atoms such that for every atom of a group, at least one interatomic distance within this group is significantly shorter than the smallest interatomic distance to an atom in another group.
He posed the question of how one goes from molecule to crystal, stating that the packing of molecular solids is largely governed by considerations of size and shape, the so-called principle of close packing. Today, X-ray crystallography is a mature science with a far-reaching impact on materials characterization. The structural insights obtained from crystal structure analysis have led to unprecedented developments in electronic devices, mineralogy, geosciences, materials science, and pharmaceuticals. Detailed knowledge of accurate structural information on active pharmaceutical ingredients (APIs) is a prerequisite for rational drug design and the synthesis of new chemical entities for the development of new medicines. As research progressed, the focus shifted to multicomponent molecular crystals (cocrystals). The knowledge obtained from the analysis of crystal structures is used in the selection of coformers for cocrystal design. A successful cocrystal synthesis requires an understanding of the supramolecular synthons that occur most frequently between the functional groups of the cocrystal components. Hence, the design of cocrystals for a given molecule starts with analyzing the functional groups available on that molecule and finding complementary functional groups which would likely form predictable supramolecular synthons. Thus, coformer selection in a cocrystal design strategy reinforces the significant role of knowledge of intermolecular interactions, which is often drawn from X-ray crystal structure analysis. In a crystal, molecules associate through a specific pattern of non-covalent interactions such as hydrogen bonds, halogen bonds,5 and π-stacking. Over the past century, single-crystal X-ray diffraction has proven to be an important tool for the unambiguous determination of crystal structure and has thus assisted in ground-breaking analyses of material properties. Concerning cocrystals, structural characterization (a) establishes the reliability of the cocrystal design strategy, (b) reveals the hydrogen bond preferences of the functional groups, and (c) provides insights into structure-property correlation. Crystal engineering is a concept of great application and scope; presently, it focuses on practical applications such as pharmaceutical cocrystals and high-energy materials.

Fig. 1. Crystal engineering involves modification of the crystal packing of a solid material by changing the intermolecular interactions.6

1.4. Non-covalent Interactions: Non-covalent interactions are ubiquitous in chemistry and are the primary source of stability for many molecular complexes in the biological, pharmacological, chemical, physical, and materials sciences. While traditional chemistry focuses on the covalent bond, crystal engineering and supramolecular chemistry exploit the non-covalent interactions which hold molecules together in a crystal lattice. These forces include hydrogen bonding, metal coordination, hydrophobic forces, van der Waals forces, π-π interactions, and electrostatic effects.
1.5. Hydrogen Bond (strong and weak): Hydrogen bonds are electrostatic and play an essential role in stabilizing molecular aggregates. Nernst first introduced the phenomenon of hydrogen bond formation in 1891.7 Bernal and Huggins proposed the term 'hydrogen bond' in 1935-36.8 In 1939, Linus Pauling defined the hydrogen bond as follows: under certain conditions, an atom of hydrogen is attracted by rather strong forces to two atoms instead of only one, so that it may be considered to be acting as a bond between them. More elaborate and expanded definitions were given by many scientists as research interest in this field grew, including Pimentel and McClellan (1960).9 IUPAC states that the hydrogen bond (designated as D-H---A, where acceptor A and donor D are electronegative atoms) is an attractive interaction between a hydrogen atom from a fragment or molecule D-H, in which D is more electronegative than H, and an atom or group of atoms A in the same or a different molecule, where there is evidence of bond formation. Depending on the nature of the donor and acceptor atoms which constitute the bond, their geometry, and their environment, the energy of a hydrogen bond can vary between 1 and 40 kcal/mol. The hydrogen bond is not a simple interaction but a complex conglomerate of at least four component interaction types: electrostatics (acid/base), polarization (hard/soft), van der Waals (dispersion/repulsion), and covalency (charge transfer). It is neither a strong van der Waals interaction nor a weak covalent bond; it is not even a strong directional dipole-dipole interaction. For the geometrical parameters of the hydrogen bond, see Figure 2. This interaction is characterized through X-ray diffraction, neutron diffraction, NMR, FT-IR, and Raman spectroscopies. Three types of hydrogen bond exist: strong, moderate, and weak (Table 1).
Because of this dual nature (very strong/weak), the hydrogen bond attracts many scientists all over the world. Hydrogen bonds are electrostatic interactions, but the proportion of electrostatic character can vary. A more expanded proposed definition is that any interaction X-H---A with a shallower energy/distance dependence should also be termed a hydrogen bond.

Table 1: Classification of Jeffrey11 for strong, moderate, and weak hydrogen bonds (the numerical data are guiding values only)

Property | Strong | Moderate | Weak
Interaction type | strongly covalent | mostly electrostatic | electrostatic/dispersive
Bond length H---A [Å] | 1.2-1.5 | 1.5-2.2 | >2.2
Lengthening of X-H [Å] | 0.08-0.25 | 0.02-0.08 | <0.02
Bond angle [°] | 175-180 | >130 | >90
Bond energy [kcal/mol] | 15-40 | 4-15 | <4
Molecular Characterization of NS-1 with Robust Growth and Enhanced Phenotypic Properties
Basic research on neurodegenerative disease relies largely on cell line studies, which also serve as the primary platform for drug screening in translational medicine. Neuroscreen-1 (NS-1) is a sub-clone of PC12, a noradrenergic cell line that shows dopaminergic properties; both are widely used in neurobiology, neurotoxicology, and drug discovery studies [1,2].
Differentiation of PC12 by NGF is well documented. NGF treatment of PC12 cells leads to cessation of cell division, induction of neurite outgrowth, and production of electrically excitable cells in culture, mimicking the characteristics of sympathetic-like neurons [6]. In addition to the neuroprotective and neuro-restorative properties of NGF, dysregulation of NGF signaling has been positively correlated with Alzheimer's disease (AD) [7], epilepsy [9], and cancer [12]. AD is characterized by death of forebrain cholinergic neurons, resulting from an imbalance in the expression of NGF, proNGF, TrkA, and p75NTR (Mufson 2008). NGF regulates proliferation and differentiation of neuronal cells via activation of the tyrosine protein kinase (TrkA) receptor and downstream signaling molecules that include the Ras/MAP kinase cascade, IP3-dependent Ca2+ release, and PI3K/Akt pathways [13]. Furthermore, NGF increases expression of ChAT and VAChT, cholinergic markers required for cholinergic neurotransmission [14,15]. ChAT enables the synthesis of acetylcholine (ACh) from acetyl-CoA and choline, whereas VAChT acts as a membrane transporter loading ACh into secretory vesicles, making it available for secretion [16]. Decreases in ChAT and VAChT are suggested to play a role in the progression of AD [17,18].
Muscarinic acetylcholine receptors (Chrms), GPCRs found in the cholinergic system, signal through the binding of acetylcholine and are involved in learning and memory [19,20], and have thus been implicated in neuronal diseases [22-24]. Similarly, adenosine receptors (AR) mediate the actions of adenosine in the CNS through the release of neurotransmitters and synaptic plasticity [25]. Among the four members, the A1 receptor is prevalent in synaptic regions, whereas the A2A receptor is localized in the striatum and olfactory bulb [26]. The A2A receptor has been shown to modulate NGF-induced neurite outgrowth (NOG) in PC12 cells and neuritogenesis in primary hippocampal neurons in association with translin-associated protein X (TRAX) [27].
The NS-1 cell line has been used as a neuronal cell model to study chemical-dependent initiation, progression, inhibition, and toxicity in neurite outgrowth assays [2,5,28,29]. Despite increasing interest in the use of NS-1 as a substitute model for PC12, investigations of alterations in growth and differentiation characteristics at the molecular level have been limited. Understanding NS-1 gene expression in differentiated and naïve states is critical in defining the representative neuronal subtype and its utility as a relevant substitute experimental model for PC12. In the present study, we compared gene and protein expression profiles through analysis of neuronal molecular markers by qPCR and immunoblot analyses. We conclude that changes in gene and protein expression could account for the enhanced phenotypic properties of NS-1, and that the expression of neuronal markers supports the use of NS-1 as an alternative and substitute cell model for PC12.
Preparation and Characterization of Melamine-Formaldehyde Crosslinked Acrylic Resins
Abstract
This lab involves the synthesis of an acrylic polymer. The reagents hydroxyethyl acrylate, styrene, and butyl acrylate were reacted in a semi-batch process. The product was then stored for one week before being crosslinked with a melamine-formaldehyde resin and a catalyst. The resulting coating was applied to an Al panel, cured, and tested according to ASTM D5402 for solvent resistance. This process was repeated and the formulation adjusted until a formulation exceeded 200 MEK double rubs. The formulation was then created in a larger batch and applied to Al, steel, and free-film substrates with drawdown bars. These samples were then cured and characterized using various ASTM methods based on mechanical and chemical properties.
Keywords
Melamine-Formaldehyde, Acrylic Resins, Crosslinking, Characterization, Semi-Batch
Introduction
Acrylic resins were first used in the 1950s in the automotive industry.2 Their durability and practicality have since improved. Acrylic resins are formed by radical chain polymerization, a process with three stages: initiation, propagation, and termination. The formation of a radical begins the polymerization. That radical species reacts with a monomer to produce a carbon radical on the monomer. This new radical can then react with further monomer, creating an ever longer chain. The polymer continues to grow, driven forward by the reactivity of the radical, until it is terminated by another radical or by other means.2
The objective of this lab is to synthesize an acrylic resin with a favorable PDI and then crosslink the polymer with melamine-formaldehyde. The goal is a coating that can be tested using numerous characterization techniques following ASTM methods.
Experimental
The first step in this experiment was the synthesis of the acrylic resin, using a semi-batch process. Xylene (100.0 g) was added to the resin kettle and heated to 90 °C. A nitrogen flow was then started to blanket the entire apparatus; care was taken to avoid an excessive flow so as to preserve the solvent. While the solvent was heating, hydroxyethyl acrylate (15.0 g), styrene (82.6 g), butyl acrylate (52.5 g), and Vazo 67 (3.75 g) were mixed in an Erlenmeyer flask until the initiator, Vazo 67, was fully dissolved. This solution was then added to an addition funnel that ran into the kettle of xylene, and the monomer solution was deposited into the heated solvent dropwise. The temperature was monitored constantly, with a target of 90-95 °C during the addition period. To control the temperature, the mantle was turned off and lowered to cool, or raised and turned on to heat; this back-and-forth cycle produced predictable, steady heating throughout the addition. The entire addition lasted 56 minutes. Once the entire monomer solution was added, the batch was held at 90-95 °C for an additional 30 minutes. A chaser of xylene (2.5 g) and Vazo 67 (0.5 g) was then added directly into the kettle, and the temperature was maintained for an additional hour. The synthesis was completed by cooling the resin and storing it in a sealed jar for future use.
One week after the synthesis, formulation with a crosslinker was performed along with characterization. The first test was determining percent solids according to ASTM D2369; the percent solids was determined to be 51.88%. With this information, a formulation could be calculated with the crosslinker, Cymel 303. The formulation of synthesized resin (10.00 g), Cymel 303 (1.30 g, 25%), and pTSA (0.026 g, 0.5%) was prepared with a drop of the flow aid Byk 301 in solvent. This was then drawn down on an Al panel that had been chemically cleaned with solvent, using a 5 mil bar. The panel was left to flash dry for 15 minutes before being put in a 160 °C oven for 20 minutes. To test the cure, MEK double rubs were conducted according to ASTM D5402; if the coating passed more than 200 rubs without failing, the formulation passed and could be used for further testing. The initial formulation failed at 50 rubs. The formulation was adjusted twice more, identical to the one stated above except that the amount of catalyst, pTSA, was increased with each new formulation. The final successful formulation contained the synthesized resin (20.09 g), Cymel 303 (2.63 g, 25%), and pTSA (0.21 g, 2%). This formulation exceeded 200 rubs and was used to coat Al and steel panels. A glass panel was also used to obtain a thin film and for gloss measurements. The Al and steel panels were subjected to the ASTM test methods explained in the Results and Discussion section.
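The crosslinker and catalyst levels above appear to be dosed on resin solids. A minimal sketch of that arithmetic (our reading of the report's numbers, not a prescribed procedure) reproduces the quoted masses:

```python
# Formulation arithmetic on resin solids (our reading of the report's
# numbers, shown for illustration; not an official procedure).

def formulation(resin_mass_g: float, solids_frac: float,
                xlink_frac: float, cat_frac: float) -> tuple[float, float]:
    """Return (crosslinker_g, catalyst_g) dosed on resin solids."""
    solids_g = resin_mass_g * solids_frac
    return xlink_frac * solids_g, cat_frac * solids_g

# Initial formulation: 10.00 g resin at 51.88% solids,
# 25% Cymel 303 and 0.5% pTSA on solids.
cymel, ptsa = formulation(10.00, 0.5188, 0.25, 0.005)
print(f"Cymel 303: {cymel:.2f} g, pTSA: {ptsa:.3f} g")
# -> Cymel 303: 1.30 g, pTSA: 0.026 g  (matches the reported masses)
```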
Results and Discussion
The acid number was calculated to be 0.55 using Equation (1):

Acid Number = (mL of KOH × Normality × 56.1) / (g of sample)    (1)

The hydroxyl equivalents were calculated to be 0.010 using Equation (2):

OH Eq. = (g of OH monomer) / (OH eq. weight of monomer)    (2)

The hydroxyl equivalent weight was calculated to be 1501.00 g/eq using Equation (3):

OH Eq. Weight = (g of sample) / (OH Eq.)    (3)

The theoretical Tg was calculated using the Fox equation, Equation (4):

1/Tg = W1/Tg1 + W2/Tg2 + W3/Tg3 + ... + Wn/Tgn    (4)
The theoretical Tg of the synthesized resin was calculated to be 16.18 °C. The Tg measured using DSC was 35.55 °C; this result is shown in Figure 1. The thin-film coating's Tg was also measured using DSC and yielded 57.39 °C; this result is shown in Figure 2.
Figure 1: Synthesized Resin DSC
Figure 2: Thin-Film DSC
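As a cross-check on Equations (1)-(4), the short sketch below reproduces the reported theoretical Tg and the PDI from Table 1. The homopolymer Tg values for styrene, butyl acrylate, and hydroxyethyl acrylate are typical literature figures (assumptions on our part, not values from this report), which is why the result lands near, not exactly on, the reported 16.18 °C.

```python
# Cross-check of the resin analysis. Homopolymer Tg values below are
# typical literature figures (our assumption), not taken from this report.

def fox_tg(masses_g: list[float], tg_k: list[float]) -> float:
    """Fox equation (Eq. 4): 1/Tg = sum(w_i / Tg_i), Tg in kelvin."""
    total = sum(masses_g)
    inv_tg = sum((m / total) / tg for m, tg in zip(masses_g, tg_k))
    return 1.0 / inv_tg

# Monomer charge: HEA 15.0 g, styrene 82.6 g, butyl acrylate 52.5 g.
# Assumed homopolymer Tg: poly(HEA) ~ -15 C, PS ~ 100 C, PBA ~ -54 C.
tg_theor = fox_tg([15.0, 82.6, 52.5], [258.0, 373.0, 219.0])
print(f"Theoretical Tg ~ {tg_theor - 273.15:.1f} C")  # ~16 C, near 16.18 C

# PDI from the GPC results in Table 1.
mw, mn = 17250, 10755
print(f"PDI = {mw / mn:.3f}")  # -> 1.604, matching Table 1
```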
The results of the DSC tests differ from the theoretically calculated ones, a difference worth noting. The temperature was held within the target range during the entire experiment (excluding start-up); the observed range was 90.0-94.9 °C. One possible explanation for the variance is the use of older reagents: self-polymerization could have begun in any of the reagents used, which would also explain the variance in the PDI shown in Table 1. A more likely source of the difference is the rate of addition. As shown in Table 3, the rate at which the monomer solution was added was not constant, ranging from 20 to 40 mL per 10 minutes, which would lead to variations in chain length. An advantage of semi-batch synthesis is that one can control the rate of monomer addition; the idea is to keep a steady flow throughout the polymerization. This was not maintained, and the results may vary because of it. DMA was supposed to be performed on the thin film, but the film was too brittle to measure.
Table 1: Results of Resin Analysis
Property | Value | SD
Percent Solids | 51.88% | 0.36%
MW | 17250 |
Mn | 10755 |
PDI | 1.604 |
Tg (theor.) | 16.18 °C |
Tg (actual) | 35.55 °C |
Table 2: Results of Crosslinked Coating Tests and Characterization
Test | Value | SD
Conical Bend | >32% |
Pencil Hardness | 8H fail / 7H pass |
Crosshatch | 5B |
Impact, Forward (in·lb) | 78.4 | 1.25
Impact, Reverse (in·lb) | 62.72 | 1.51
Konig Hardness | 191.67 | 7.29
Thickness (µm) | 41.84 | 4.66
Gloss 20° | 164 | 2.42
Gloss 60° | 162.2 |
Gloss 85° | 114.4 |
Table 3: Data Observed During the Addition of Monomer Solution
Time (min)  Amount Added (mL)  Temperature (°C)
0 0 85.1
10 35 91.7
20 25 92.3
30 40 91.3
40 25 91.3
50 20 94.0
56 25 93.0
Despite the temperature maintenance, the conversion of the monomer was less than 99%. This is known from the percent solids value: a percent solids value of 66% would have indicated a more complete conversion. Conversion could have been raised by keeping a steady feed, heating the sample longer, and using slightly more initiator. This would increase the conversion, but it could lead to other problems with the PDI.
This synthesis needs to be run through in one go: initiation, propagation, and termination are not processes that can simply be paused. If the temperature were to drop to room temperature, termination could occur early and leave monomer unreacted or produce low-MW polymer. Any loss of solvent over time would also affect the viscosity and make measurements of percent solids more difficult.
GPC was used to find the MW and Mn of the synthesized resin. GPC works by first diluting the resin and passing it through a series of columns, where the polymers separate by size: higher-MW polymers are excluded from the pores of the packing and elute earlier, while lower-MW polymers spend more time in the pores. The amount of polymer exiting the column series over time is measured and compared to a polymer standard.1 It is important to note the limitations of GPC: several variables, such as flow rate and elution volume, can give incorrect results.1
As shown in Table 2, the coating has both strengths and weaknesses in terms of performance. The coating could be made harder by increasing the crosslinking, though not so much as to create an inflexible, brittle coating.
Conclusion
This lab encompassed the synthesis and characterization of an acrylic resin. The resin was crosslinked with melamine-formaldehyde to form a coating, which was then subjected to a solvent resistance test. A successful formulation was found, and multiple panels were coated. The coating was then subjected to numerous tests to characterize its physical and chemical properties.
References
1. Holding, S.R. Gel Permeation Chromatography. Endeavour, Volume 8, Issue 1, 1984, pp. 17-20 (accessed Oct. 2, 2018).
2. Wicks, Z. W. Organic Coatings: Science and Technology; Wiley-Interscience: Hoboken, NJ, 2007.
Review on Large-Area Synthesis of Graphene Films, Transfer Process, Characterizations and Applications
Introduction
Recently, much attention has turned to the structural and electronic properties of carbon-based materials. Graphene, especially, is at present one of the hottest topics in condensed-matter physics and materials science. This is because graphene has not only unusual properties, including extreme mechanical strength, high thermal conductivity, and two-dimensional character, but also peculiar electronic characteristics, such as Dirac particles with a linear dispersion, a transport energy gap, and a simple optical absorption coefficient (Geim & Novoselov, 2007; Nair et al., 2008). These unique properties mean it could have a wide array of practical uses. In addition to monolayer graphene, few-layer graphene has been extensively studied; for example, bi-layer graphene develops a band gap when an external electric field is applied (Castro et al., 2007; Zhang et al., 2009). Graphene sheets have been produced mainly by exfoliating graphene flakes from bulk graphite and depositing them on SiO2/Si substrates, but the size and crystalline quality are not easily controlled. Some groups have grown graphene sheets epitaxially on SiC(0001) (Hibino et al., 2010); however, those graphene layers have been widely distributed in thickness (N. G. Prikhod'ko et al., Russian Journal of Physical Chemistry B, Vol. 8, No. 1, 2014).
Graphene, a single layer of carbon atoms arranged in a hexagonal lattice, is a 2D material with outstanding physical properties. The successful isolation of graphene has drawn great interest for experimental investigation and has opened the route to a wide range of potential applications. Mechanically exfoliated graphene from bulk graphite has enabled fundamental investigations of the physical properties of graphene; however, this technique is suitable neither for integration into practical device fabrication processes nor for the synthesis of large-surface-area devices. For several applications, excluding active semiconducting devices exploiting the quantum properties of single carbon layers, a material composed of few-layer graphene (FLG) is also extremely promising. The overall characteristics of graphene films, both single-layer and FLG, such as size, crystallinity, continuity, homogeneity, and fabrication reproducibility, are mandatory for successful practical application (R. Giorgi et al.).
Synthesis methods
Synthesis of Graphene Films in a Flame
The results of studying the synthesis of graphene in a premixed propane-oxygen-argon flame at atmospheric conditions have been reported. A temperature of 900-950 °C and an exposure time of 5 min were demonstrated to be suitable for the synthesis of graphene films on a nickel substrate, which is preferable to a copper substrate. It was demonstrated that the formation of graphene layers on the substrate occurs vertically along the flame height, with a subsequent changeover to a soot structure. The minimum number of graphene layers (two or three) is observed at angles of inclination of the substrate relative to the vertical axis of the flame within 0°-30° (N. G. Prikhod'ko et al., Russian Journal of Physical Chemistry B, Vol. 8, No. 1, 2014).
Figure: Photograph of a nickel substrate with carbon structure.
Synthesis of Graphene Films on Copper Substrates by CVD of Different Precursors:
Graphene films on the order of 1 cm² were grown on copper foil substrates by CVD using hydrogen/methane or hydrogen/argon/ethanol mixtures as gas precursors. The growth processes were performed near 1,000 °C at both atmospheric and low pressures. A system for fast cooling of the sample, based on rapid extraction from the hot zone of the furnace, was implemented, allowing the temperature to drop below 600 °C in a few seconds. Samples grown under different conditions were analyzed by SEM, Raman spectroscopy, and XPS with the aim of assessing their characteristics and refining the growth process (R. Giorgi et al.).
Figure: Scheme of the CVD reactor.
Figure: Graphene growth process.
Figure: SEM image of Cu substrate after graphene growth.
Films consisting of fewer than 5 graphene layers have been grown by CVD from both methane and ethanol precursors. The use of copper substrates has allowed the growth of large-area continuous films on the order of 1 cm²; a wet procedure was followed to transfer the graphene films onto SiO2/Si substrates more suitable for characterization. Pressure and growth time were found to be the main process parameters affecting the thickness and quality of the graphene films. The grown films exhibited good crystallinity but were composed of different overlapping regions with different numbers of layers. Factors influencing film homogeneity and uniformity were identified in the substrate features. Future work will focus on optimizing substrate treatments, with the aim of achieving more uniform large-area graphene films with controlled structure (number of layers and crystallinity). Matching the large copper grain size (up to 1 mm) with controlled growth of graphene (single-layer and FLG) remains an interesting goal and a significant challenge (R. Giorgi et al.).
Large-Area Synthesis of Graphene Films on Copper Foils:
Large-area graphene films on the order of centimeters were grown on copper substrates by chemical vapor deposition using methane. The films are predominantly single-layer graphene, with a small percentage (less than 5%) of the area having few layers, and are continuous across copper surface steps and grain boundaries. The low solubility of carbon in copper appears to help make this growth process self-limiting. Graphene film transfer processes to arbitrary substrates were also developed, and dual-gated field-effect transistors fabricated on Si/SiO2 substrates showed electron mobilities as high as 4050 cm2 V-1 s-1 at room temperature (Xuesong Li).
Figure: Time dependence of experimental parameters: temperature, pressure, and gas composition/flow rate.
Figure: SEM images of graphene on Cu with different growth times of (A) 1 min, (B) 2.5 min, (C) 10 min, and (D) 60 min.
In recent work, thin Ni films and a fast-cooling process have been used to suppress the amount of precipitated C. However, this process still yields films with a wide range of graphene layer thicknesses, from one to a few tens of layers, and with defects associated with fast cooling (5-7). Our results suggest that the graphene growth process is not one of C precipitation but rather a CVD process. The precise mechanism will require additional experiments to understand in full, but the very low C solubility in Cu (23-25) and poor C saturation as a result of graphene surface coverage may play a role in limiting or preventing the precipitation process altogether at high temperature, similar to the impeding of carburization of Ni (26). This provides a pathway for growing self-limited graphene films. To evaluate the electrical quality of the synthesized graphene, we fabricated dual-gated FETs with Al2O3 as the gate dielectric and measured them at room temperature. Along with a device model that incorporates a finite density at the Dirac point, the dielectric, and the quantum capacitances (9), the data are shown in Fig. 3. The extracted carrier mobility for this device is ~4050 cm2 V-1 s-1, with a residual carrier concentration at the Dirac point of n0 = 3.2 × 10^11 cm^-2. These data suggest that the films are of reasonable quality, at least sufficient to continue improving the growth process toward a material quality equivalent to exfoliated natural graphite (Xuesong Li).
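As a rough plausibility check (our own back-of-the-envelope estimate, not a figure from the cited work), the Drude relation sigma = n·e·mu turns the quoted mobility and residual carrier density into a sheet conductance near the Dirac point:

```python
# Back-of-the-envelope sheet resistance from the quoted mobility and
# residual carrier density (our estimate, not from the cited paper).
E = 1.602e-19          # elementary charge, C
mu = 4050e-4           # mobility: 4050 cm^2/(V s) -> m^2/(V s)
n0 = 3.2e11 * 1e4      # carrier density: cm^-2 -> m^-2

sigma_sheet = n0 * E * mu      # sheet conductance, S per square
r_sheet = 1.0 / sigma_sheet    # sheet resistance, ohm per square
print(f"Sheet resistance ~ {r_sheet / 1e3:.1f} kOhm/sq")  # ~4.8 kOhm/sq
```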
Synthesis of multi-layer graphene films on copper tape by the atmospheric pressure chemical vapor deposition method:
Graphene films were successfully synthesized by the atmospheric pressure chemical vapor deposition (APCVD) method. Methane (CH4) gas and copper (Cu) tape were used as the carbon source and catalyst, respectively. The CVD temperature and time were in the ranges of 800-1000 °C and 10 s to 45 min, respectively. The roles of CVD temperature and time in the growth of graphene films were investigated in detail via scanning electron microscopy (SEM) and Raman spectroscopy. The SEM images and Raman spectra show that the quality of the graphene films improved with increasing CVD temperature due to increased catalytic activity.
Multilayer graphene films were successfully synthesized on Cu tape by the APCVD method. The quality of the graphene films improved with increasing CVD temperature, while the growth time did not much affect the number of layers. A CVD temperature of 1000 °C and a CVD time of 30 min are the optimum conditions for growing high-quality graphene films on Cu tape. The graphene films were successfully transferred from the Cu tape to other substrates by wet etching the Cu with an iron(III) nitrate solution (Van Tu Nguyen, Adv. Nat. Sci.: Nanosci. Nanotechnol. 4 (2013) 035012).
Self-Standing Graphene Sheets Prepared with Chemical Vapor Deposition and Chemical Etching:
The growth mechanism of graphene layers on the Ni(111) surface was studied. In-situ observations of the growth of mono-, bi-, and tri-layer graphene, using carbon segregation phenomena on Ni(111), were reported by low-energy electron microscopy (LEEM), a powerful technique for investigating thin films on the mesoscopic scale. Self-standing graphene sheets were also fabricated by chemically etching the substrate (Odahara et al., 2009). The chemical process that removes the Ni substrate makes it possible to prepare self-standing graphene sheets, which are characterized by scanning electron microscopy (SEM) or transmission electron microscopy (TEM).
Figure: Typical snapshots of LEEM images obtained as the temperature was decreased from 1200 K to 1125 K (images (a) to (d); 6 µm field of view; the letter in each image indicates the time-lapse order). Two graphene domains united to form one graphene sheet. Image (e) is a typical LEEM image with a 100 µm field of view; the surface is entirely covered with monolayer graphene (primary electron energy 3.5 eV). Image (f) is a typical µLEED pattern from the graphene-covered surface; the orientation of the graphene is slightly altered because the sheet is curved.
Figure: Typical LEEM images of graphene growth at different stages: (a)-(b) first-layer growth at 1125 K; (d)-(f) second layer at 1050 K; (g)-(h) third layer at 1050 K. Image (c) is a typical µLEED pattern of a 1 x 1 atomic structure from the single-layer graphene-covered surface. Image (i) shows the electron reflectivity-energy curves obtained from each area.
Figure: (a) TEM image of a carbon aggregate on an Au mesh with 10 µm × 10 µm squares; (b) magnified TEM image of the thinnest area of the carbon aggregate; (c) its electron diffraction pattern.
Figure: TEM image of another area of the carbon aggregate (a) and its diffraction pattern (b). Doublets of diffraction spots are clearly visible in (b), and new carbon-nanotube-like structures in (a); the hole is covered with double-layer graphene.
Figure: Typical Raman spectrum of the monolayer self-standing graphene sheets; a small defect-origin D peak was detected at ~1350 cm-1.
Figure: SEM image of a single graphene sheet at 5 kV and its LETED pattern (upper right).
Figure: LETED pattern of single-layer graphene at (a) 1 kV and (b) 500 V.
Figure: SEM image of a folded double graphene sheet (a) and its LETED patterns at (b) 4 kV and (c) 2 kV.
The Ni(111) surface is an excellent substrate for the growth of single-layer graphene sheets with macroscopic dimensions. Graphene sheets with a 1 x 1 atomic structure grew epitaxially by CVD or surface-segregation techniques. We observed in situ the growth of mono-, bi-, and tri-layer graphene, step by step, using carbon segregation phenomena on Ni(111) by LEEM. The findings are summarized as follows:
- Uniform monolayer graphene can be grown on Ni(111) by adjusting the temperature. No domain boundaries or wrinkles were detected by LEEM.
- The second- and third-layer graphene grew at the interface under the first and second layers. Bi-layer graphene domains grew to at least the 100 µm scale. The third layer started to grow before completion of the second layer at 1050 K in this experiment; more precise control of temperature seems to be required to complete the second layer before third-layer growth begins.
- The shape of the islands differed depending on the thickness: the first- and third-layer islands exhibit hexagonal edges, while the second-layer islands possess dendritic edges.
- The different shapes of the first-, second-, and third-layer islands presumably originate from the interfacial bond strength, which depends on the graphene thickness.
- The number of nucleation sites for graphene growth is extremely small on the Ni(111) surface, which is an important factor for the growth of large single-domain graphene crystals.
- Chemically etching the Ni substrate made it possible to separate macroscopic self-standing graphene sheets (Jian Ru Gong, 2011).
Applications of large-scale graphene
Graphene for nanoelectronics
Graphene shows a pronounced ambipolar electric field effect whereby charge carriers can be tuned continuously between electrons and holes. Single-layer graphene atop a thermally grown SiO2 layer on a highly doped Si substrate may serve as a prototype field-effect transistor. In this arrangement, the SiO2 plays the role of an insulating layer, so a back-gate voltage can be applied to vary the carrier concentration (figure 11b). Early graphene FET devices demonstrated by Novoselov exhibited dopant concentrations as high as 10^13 cm^-2 and achieved mobilities that could exceed 10,000 cm2/Vs (Novoselov, Geim et al. 2004). This translates into ballistic transport on submicron scales. The room-temperature mobility is limited by impurities or corrugations of the graphene surface, which means it can still be improved significantly, up to the order of 10^5 cm2/Vs (Bolotin, Sikes et al. 2008; Du, Skachko et al. 2008).
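The back-gating described above is just a parallel-plate capacitor: the induced carrier density is n = Cox·Vg/e with Cox = ε0εr/d. The sketch below assumes the common 300 nm SiO2 gate oxide, a typical value that is not specified in the text:

```python
# Carrier density induced by a back-gate voltage through SiO2,
# modeled as a parallel-plate capacitor. The 300 nm oxide thickness
# is an assumed typical value, not stated in the text.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_SIO2 = 3.9     # relative permittivity of SiO2
E = 1.602e-19      # elementary charge, C

def carrier_density(v_gate: float, d_oxide_m: float = 300e-9) -> float:
    """Induced sheet carrier density in cm^-2 for a given gate voltage."""
    c_ox = EPS0 * EPS_SIO2 / d_oxide_m   # gate capacitance, F/m^2
    return c_ox * v_gate / E / 1e4       # m^-2 -> cm^-2

print(f"n at Vg = 10 V: {carrier_density(10):.2e} cm^-2")  # ~7.2e11 cm^-2
```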
Electrons in graphene behave like massless relativistic particles, which control most of its electronic properties. Among the most important consequences of this unusual dispersion relation are the half-integer quantum Hall effect and the absence of localization, which can be very important for graphene-based field-effect transistors (FETs) (Geim and Novoselov 2007). Mechanical exfoliation of highly ordered pyrolytic graphite (HOPG) or high-purity graphite flakes can generate graphene crystals with very few flaws, which in turn show high charge-carrier mobility. Figure 12 shows scanning electron microscopy (SEM) and atomic force microscopy (AFM) images of the graphene-based device reported in the literature as having the highest electron mobility to date (Bolotin, Sikes et al. 2008). The graphene film was obtained by mechanical exfoliation of graphite on a Si/SiO2 substrate, in which the oxide layer below the graphene was etched away to obtain a free-standing graphene flake connecting the metal electrodes.
Electrical measurements of resistivity vs. gate voltage show the intrinsic ambipolar characteristics of graphene. It was also established that the transfer characteristics of the device improve greatly after a high-current annealing procedure that removes contaminants from the graphene surface. The mobility µ for this device reaches an exceptional value of 230,000 cm2/Vs, measured at the highest carrier density n = 2 × 10^11 cm^-2. Such high mobility would in principle favor high-frequency performance. Furthermore, graphene devices pursuing high frequency have demonstrated encouraging characteristics, exhibiting a cutoff frequency fT of 26 GHz, the frequency at which the current gain becomes unity, signifying the highest frequency at which signals are propagated (Lin, Jenkins et al. 2008). Recently, P. Avouris and collaborators reported the fabrication of graphene FETs on SiC substrates with a cutoff frequency of 100 GHz for a device with a gate length of 240 nm, using a source-drain voltage of 2.5 V (Lin, Dimitrakopoulos et al.). This fT exceeds those previously reported for graphene FETs as well as those of Si metal-oxide-semiconductor FETs at the same gate length (~40 GHz at 240 nm) (Meric, Baklitskaya et al. 2008; Moon, Curtis et al. 2009).
CVD graphene for macroelectronics: Transparent conductive films
Another intrinsic property of graphene is its transparency: a single sheet of graphene absorbs only 2.3% of incident light. This combination of high conductivity and low light absorption makes the material an ideal candidate for transparent conductive films. It is very tempting to use the unique properties of graphene for technology applications beyond graphene FETs: composite materials, photodetectors, supports for biological samples in TEM, mode-lockers for ultrafast lasers, and many other areas would gain strongly from using graphene for non-FET purposes.
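Treating each layer as absorbing 2.3% of the light reaching it, a simple illustrative model that ignores reflection and interference, the transmittance of a few-layer film falls off geometrically:

```python
# Optical transmittance of N stacked graphene layers, assuming each
# layer absorbs 2.3% of the light reaching it. This simple geometric
# model ignores reflection and interference; illustration only.
ABSORPTION_PER_LAYER = 0.023

def transmittance(n_layers: int) -> float:
    return (1.0 - ABSORPTION_PER_LAYER) ** n_layers

for n in (1, 2, 4, 10):
    print(f"{n:2d} layer(s): T = {transmittance(n):.1%}")
# 1 layer ~97.7%, 2 ~95.5%, 4 ~91.1%, 10 ~79.2%
```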
Graphene applications in photovoltaics
Photovoltaic cells: Graphene vs ITO
Solar energy harvesting using organic photovoltaic (OPV) cells has been proposed as a means to achieve low-cost energy due to their ease of manufacture, light weight, and compatibility with flexible substrates. A critical aspect of this type of optoelectronic device is the transparent conductive electrode through which light couples into the device. Conventional OPVs typically use transparent indium tin oxide (ITO) or fluorine-doped tin oxide (FTO) as such electrodes (Peumans, Yakimov et al. 2003). However, the scarcity of indium reserves, intensive processing requirements, and the highly brittle nature of metal oxides impose serious limitations on the use of these materials for applications where cost, physical conformation, and mechanical flexibility are important.
Characterization of Gelatin, a Biopolymer for Drug Delivery
INTRODUCTION
Drug delivery has become an interesting field in the present day, bringing together materials science and engineering, mechanical engineering, biomedical engineering, and pharmacology. It provides an effective way to use a compound to achieve a therapeutic effect in humans and animals. The techniques involve modifying drug release profiles, absorption, distribution, and elimination in order to improve product efficacy and safety, as well as patient compliance and convenience.
Gelatin is a naturally occurring, macromolecular, biodegradable protein obtained from the skins, tissues, and bones of animals. Its high water solubility, non-toxicity, and high mechanical strength and elasticity in the dry state make it a suitable material for drug delivery. There are two kinds of gelatin: Type A, with an isoionic point of 7 to 9, is derived using exclusively acid pretreatment, while Type B, with an isoionic point of 4 to 5, is the result of an alkaline pretreatment.[2]
The usual sources of gelatin are bovine and pig skins and demineralized bones and hooves. This is a problem, since some people avoid gelatin-based medicines for religious or social reasons, or out of concern over mad cow disease. To address this, another source of gelatin was investigated: the extraction of Type B gelatin from the fresh-water fish L. rohita. In this project we look at its extraction, the manufacturing of gelatin, and the characteristic properties of the Type B gelatin obtained.[1]
PROCESSING
Since there are two types of gelatin, there are two fundamental manufacturing processes: the alkaline process and the acid process. The resulting products can be very different in terms of composition and physical properties.[4]
Acid-processed collagen is soaked in dilute acid and then extracted at about pH 4 for gelatin manufacture. The non-collagenous proteins and mucoproteins of the tissue are isoelectric at this pH and are therefore less soluble and more readily coagulated under the extraction conditions. Which contaminants are removed in this way depends on the quality and origin of the raw material and on the reproducibility of the producer's process.[4]
In the alkaline process, the pretreatment of the collagen requires prolonged soaking in alkaline solutions (generally saturated lime-water). A good portion of the impurities (proteins and mucosubstances) are soluble at this pH and are extracted. Gelatins from the alkaline process tend to be cleaner than acid-produced gelatin, but this variation may be due to the manufacturing process.[4]
In addition to the challenges mentioned previously, the manufacture of gelatin is subject to further complications. For instance, the total number of carboxyl groups available for ionization depends on the extraction method. Different gelatins can have different proportions of acidic and basic groups, and therefore different isoelectric points. Charged groups influence the interactions between neighboring gelatin molecules, between each molecule and the solvent, and between different parts of the same molecule, as the protein chains are flexible. The extent of these factors changes with pH and also depends on the total ionic composition of the system; thus a detailed description of the solvent as well as of the gelatin is required.[4]
CHARACTERIZATION
Sample Preparation
Fish scales of L. rohita weighing 100-200 g were collected at Guntur, Andhra Pradesh, India. The scales were removed by hand and kept in airtight bags on ice for preservation. Gelatin was extracted using the alkaline method. Dried scales were stirred in a 5% NaCl solution for 30 min at room temperature. This step was repeated twice, and the scales were then stirred in 4% NaOH to remove the non-collagenous proteins. To remove lipids from the scales, isobutyl alcohol was used; this step was repeated three times in a digital linear shaker. The final step was demineralisation, done using 0.5 N EDTA at pH 7.66 for four time periods of 12 h, 2 h, 2 h, and 1 h. The scales were then dried on plastic trays and ground to obtain the gelatin in powder form.[1]
Gelatin Yield
The gelatin yield is given by the following equation:[1]

Yield of gelatin (%) = (weight of dried gelatin) / (dry weight of fish scale) × 100
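As a worked example of this equation: the report gives only the final 24% figure, so the masses below are hypothetical values chosen to reproduce it.

```python
# Gelatin yield from the equation above. The masses here are
# hypothetical, chosen to reproduce the reported ~24% yield.
def gelatin_yield(dried_gelatin_g: float, dry_scale_g: float) -> float:
    return dried_gelatin_g / dry_scale_g * 100.0

print(f"Yield: {gelatin_yield(24.0, 100.0):.0f}%")  # -> 24%
```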
Determination of Gelatin pH
To determine the pH of gelatin, a 1% gelatin solution is prepared in distilled water and cooled to 25 °C in a water bath. The pH is then measured.
UV-vis Spectroscopy Analysis
Ultraviolet and visible absorption spectroscopy is the measurement of the attenuation of a beam of light after it passes through a sample or after reflection from a sample surface. Absorption measurements can be made at a single wavelength or over an extended spectral range. UV spectroscopy is used for:
- Detection of functional groups
- Detection of Impurities
- Qualitative analysis
- Quantitative analysis
- Single compound without chromophore
- Drugs with chromophoric reagent
Ultraviolet absorption spectra arise from the transition of electrons within a molecule from a lower level to a higher level. When a molecule absorbs ultraviolet radiation of frequency ν, an electron in that molecule undergoes a transition from a lower to a higher energy level. The energy difference is given by

E1 - E0 = hν
The UV-vis absorption spectrum was recorded using a UV-vis double-beam spectrophotometer in the range of 200-400 nm.
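As a worked example of the relation above, restated with ν = c/λ, the photon energy at the 224 nm absorption maximum reported in the Results section works out as follows:

```python
# Photon energy at the 224 nm absorption maximum reported below,
# using E = h*nu = h*c/lambda. Constants in SI units.
H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s

wavelength = 224e-9                   # absorption maximum, m
energy_j = H * C / wavelength         # photon energy, J
energy_ev = energy_j / 1.602e-19      # convert J -> eV
print(f"E = {energy_j:.2e} J = {energy_ev:.2f} eV")  # ~8.87e-19 J, ~5.54 eV
```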
Fourier transform infrared (FTIR) spectroscopy analysis
FTIR is a technique based on vibrational spectroscopy: an infrared spectrum is obtained using Fourier transforms, which represent a range of sinusoidal waves as a single summation function. FTIR has now replaced the dispersive method, as it has a higher signal-to-noise ratio.[3]

The IR beam enters the interferometer, which has a beam splitter and two mirrors. The beam splitter splits the radiation: one beam strikes a moving mirror and one strikes a fixed mirror. The two beams then recombine, interact with the sample, and travel to the detector. The resulting signal is plotted and converted into a spectrum by a Fourier transform.[3]
100 mg of KBr and 2 mg of gelatin were mixed and placed in the FTIR instrument. FTIR was performed at room temperature, and measurements were taken.
X-Ray diffraction
To determine the crystal structure, X-ray diffraction was used. The operating voltage was 45 kV and the operating current 40 mA. Cu Kα radiation was used.[1]
Scanning Electron Microscopy
Scanning electron microscopy was used to determine the surface morphology of the gelatin. The gelatin was coated with gold in a vacuum sputter coater and photographed.[1]
RESULTS AND DISCUSSION
Yield of Gelatin
The gelatin extracted was yellow, and the yield by weight was 24%, which is higher than the yields of red tilapia (7.81%) and black tilapia (5.39%) found in previous studies.[1]
Characterization results
UV-vis spectroscopy showed absorption at 224 nm, indicating the presence of strong peptide bonds; absorption at 210-240 nm indicates the peptide chromophore. Since the operational range was 200-400 nm, the results can be considered valid (see Figure 2).[1]
Figure 2. UV-vis absorption spectrum of fish scale gelatin
The FTIR measurement made it possible to identify the amide groups present and helped identify the bonds responsible for structural and functional stabilization. Proteins are built from amino acids joined by amide bonds, which can be identified by the specific amide bands absorbed in the FTIR measurement.[1]
Peak (cm-1) | Possible functional band
3433 | Hydrogen bond
1630 | Amide I
1565 | Amide II
1240 | Amide III
1460 | Symmetric bending
1380 | Asymmetric bending

Table 1: FTIR results for gelatin
Figure 3. FTIR spectrum of gelatin
Porosity was determined by scanning electron microscopy analysis. Porosity characterization is the determination of open pores, which govern properties such as permeability and the surface area of the porous structure. The microstructure obtained shows that the polymer has an array of hollow cells. Higher density gives greater mechanical strength, while higher porosity provides a better biological environment; the SEM micrograph shows a decent balance between the two (see Figure 4).[1]
Figure 4. SEM micrograph of gelatin extracted from fish scale
The XRD diffractogram shows a sharp peak of low intensity at 2θ = 7° and a broad peak at 2θ = 19°. This pattern is usually assigned to the triple-helical crystalline structure of gelatin.[1]
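To connect these peak positions to real-space dimensions, the sketch below applies Bragg's law. It is an illustrative calculation, not part of the cited study, and it assumes first-order diffraction (n = 1) and the common Cu Kα wavelength of 1.5406 Å (the Methods state only "Cu Kα").

```python
# Minimal sketch: Bragg's law, n*lambda = 2*d*sin(theta), solved for d.
import math

wavelength = 1.5406  # assumed Cu K-alpha wavelength, angstroms

def d_spacing(two_theta_deg: float) -> float:
    """Return the lattice spacing d (angstroms) for a 2-theta peak position."""
    theta = math.radians(two_theta_deg / 2)
    return wavelength / (2 * math.sin(theta))

print(d_spacing(7.0))   # sharp peak -> ~12.6 angstroms
print(d_spacing(19.0))  # broad peak -> ~4.7 angstroms
```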
Conclusion
The study shows that L. rohita is a good source of raw material for gelatin production. The extracted gelatin has the porosity required for drug delivery and a triple-helical crystalline structure, and FTIR confirmed its chemical bonds. Fish-based gelatin is thus an effective source of gelatin for drug delivery.
References
1. Merina Paul Das, Suguna PR, Karpuram Prasad, Vijaylakshmi JV, Renuka M. Extraction and characterization of gelatin: a functional biopolymer. Int J Pharm Pharm Sci 2017;9(9):239-242.
2. Gavasane AJ, Pawar HA. Synthetic biodegradable polymers used in controlled drug delivery system: an overview. Clin Pharmacol Biopharm 2014;3:121. doi:10.4172/2167-065X.1000121
3. Leng Y. Materials Characterization: Introduction to Microscopic and Spectroscopic Methods, 2nd ed. John Wiley & Sons; 2013. ISBN 978-3-527-33463-6.
4. Felix PG. Characterization and correlation analysis of pharmaceutical gelatin. Graduate Theses and Dissertations; 2003. https://scholarcommons.usf.edu/etd/1365
Identification and Characterization
Pancreatic cancer (PC) ranks as the fourth leading cause of cancer death in both men and women in the United States, claiming over 40,000 lives a year. PC is a highly heterogeneous group of diseases, as malignancies can develop from both the endocrine and exocrine pancreas. Pancreatic ductal adenocarcinoma (PDAC), an exocrine tumor, accounts for approximately 85% of PC cases. While it occurs rarely (2% of all malignant tumors), it is devastating: the average survival after diagnosis is only one year. The low survival rates arise partly from the fact that PDACs do not manifest with specific symptoms and are therefore often diagnosed at a late stage. On average, patients are diagnosed at age 71, and the risk of developing PDAC rises with age.
While a lot of research is being conducted, major clinical advances have not been achieved due to several unique obstacles. PDACs dynamically interact with their stroma, producing a dense fibrous tissue around the tumour that functions as a barrier to systemic drug delivery and penetration. Currently, the lack of specific serological biomarkers is hindering the development of screening programs to enable earlier diagnosis. Additionally, metastatic spread often occurs early, even before the primary tumour can be detected by current methods. Because of these complexities, there is an enormous need for new detection methods and treatment options.
In the last decades, monoclonal antibody-based strategies have been widely utilized in the field of oncology. The affinity and antigen-specificity of antibodies make them attractive for therapeutic and diagnostic applications, and render them excellent tools to study cellular and molecular processes. Antibody binding may have different effects, as it can cause the neutralization, relocation or even degradation of its targets. These characteristics have earned them the name 'magic bullets of medicine'. In the clinic, antibodies against PD-L1 and CTLA-4 are currently widely used in the treatment of hematological cancers and show great success. Several currently available antibody therapies have been tested in PDAC, but almost none were found to be effective (PMC4714146). New attempts to generate customized antibodies with specialized binding properties, half-lives and functions through structural modifications have opened up new avenues for the exploitation and application of antibodies. Major advances in clinical efficacy have been made through the modification of Fc regions, resulting in optimized anti-tumor immune responses. In the in vitro setting, the addition of localization signals onto antibody fragments allows for subcellular targeting. This strategy is limited for full-size antibodies which, due to their reliance on intrachain disulfide bonds and pairing of heavy and light chain variable regions, cannot be properly assembled in the reducing environment of the cytoplasm. Full-size antibodies can be shrunk down to single-chain variable fragments (scFv) (image 1b): the variable regions of the heavy and light chain, connected by a glycine or glycine-serine linker.
The discovery of heavy chain only antibodies in camelids opened up a new avenue for antibody-based strategies. The antibodies within this unique subset of IgG-type immunoglobulins are composed of two identical heavy chains bearing a variable domain (VHH), but lack light chains and CH1 domains. The variable domain consists of four framework regions, which connect its three hypervariable loops (CDRs). They are derived from a single cluster of D and J segments that serves both conventional and heavy chain only antibodies. There is strong sequence homology to the human V region, particularly to the VH3 family (PMID: 19010777). The paratope of the variable domain is determined by its three CDRs, but interaction with the antigen occurs primarily via CDR3. The CDR3 loops are longer than in conventional antibodies, enabling them to reach sites which are inaccessible to conventional VH-VL pairs, such as clefts or depressions on the protein's surface (PMID: 22886243). Consequently, the range of epitopes probed by heavy chain only antibodies may differ from conventional antibodies, which is promising for the identification of targets for which no conventional antibodies are available.
Heavy chain only antibodies can be shrunk into variable region fragments, which are often referred to as nanobodies or VHHs. Nanobodies come with a unique set of advantages and characteristics, including small size (~15 kDa), a higher tissue penetration rate than conventional antibodies, high thermostability, good solubility, low immunogenicity and ease of production. Furthermore, the singular nature of VHHs allows them to be used for intracellular expression. While conventional antibodies cannot be properly assembled in the reducing intracellular environment, VHHs are ideally suited for cytosolic expression because of their reduced reliance on disulfide bond formation. This trait has been productively exploited for in vivo cellular imaging and to confer new phenotypic traits (e.g. viral resistance). Lastly, the pharmacokinetic behaviour of VHHs, specifically their short half-life, is well suited for applications where fast clearance is required, such as tumor imaging or delivery of toxins and radioisotopes; it can be modulated by formatting, PEGylation or fusion to serum albumin.
Installation of any payload onto a VHH requires site-specific chemical modification, which can be realized via enzymatic methods such as sortase-mediated transpeptidation. Sortase recognizes a specific amino acid motif (LPXTG), which can be installed on the C-terminus of a VHH, and ligates it with an oligoglycine nucleophile of choice. This method combines the precision of a genetically encoded tag with the specificity of enzymatic reactions, and affords construction of proteins conjugated with non-genetically encoded functional groups or non-natural moieties. Through sortase-mediated installation of radioisotopes, the in vivo distribution of the labeled VHH can be tracked with positron emission tomography (PET). This can be applied to follow immune responses over time in a non-invasive manner in live animals, or to image any target structure for which specific probes are available.
The protein family of integrins has been implicated in many processes within the context of cancer, including proliferation, migration and invasion (PMID: 20029421), and integrins have been discussed as putative biomarkers and targets for therapy. Integrins, consisting of α and β subunits, are unique amongst transmembrane receptors because of their capacity to signal bi-directionally, enabling them to sense, modulate and respond to extracellular stimuli. Several integrins have been found to be differentially expressed in PDAC, including α2β1 and αvβ6. The αvβ6 integrin is expressed on epithelia only during embryogenesis, wound healing and neoplasia, and does not occur on healthy adult tissue. Additionally, its expression during cancer can act as a prognostic factor and is associated with poor patient survival. Nanobodies that recognize these surface receptors could potentially be used to image and treat pancreatic cancers, and would be important additions to the research toolbox.
Previously, alpaca immunizations have been carried out with purified integrins and pancreatic cancer organoids of human and mouse origin. From this immunization campaign, nanobodies against αVβ6, α4β1 and α5β1 have been isolated via yeast display technology; these have yet to be biochemically characterized. Additionally, a new immunization with α2β1 has been completed, for which a phagemid VHH library has been constructed and screened via phage display. The specificity of the identified VHHs has to be established. In this project, the individual nanobodies will be expressed with a C-terminal sortase recognition motif to enable site-specific installation of fluorophores, metal chelators and other payloads. The final products will be used for flow cytometry, to establish antigen specificity, and for biochemical characterization.
- α2β1
- αVβ6
- α4β1
- α5β1
Regarding cancer cells, integrins serve roles in numerous aspects, including proliferation, survival, migration and invasion (1).
The protein family of integrins, cell bound adhesion molecules, could serve as tumour specific targets.
α2β1 - has been shown to be involved in the malignant phenotype of pancreatic cancer.
α5β1 - MM: induction and/or upregulation of expression associated with progression in patient Bx 1; OvCa: expression significantly correlates with higher clinical stage 48
αvβ6 - Integrin αvβ6, a member of the integrin protein family, is overexpressed in numerous types of carcinomas, such as colon, lung, cervical, ovarian, and pancreatic cancers, but is expressed at low or undetectable levels in healthy organs (3). Pancreatic ductal adenocarcinomas exhibit the highest integrin αvβ6 expression among gastroenteropancreatic adenocarcinomas (4). Moreover, high expression of integrin αvβ6 in carcinomas is a prognostic factor of the disease and is correlated with poor patient survival (5,6). Thus, molecular imaging agents that target integrin αvβ6 would be highly useful in the receptor-targeted detection of pancreatic cancer and in the noninvasive monitoring of tumor prognosis.
5. Bates RC, Bellovin DI, Brown C, et al. Transcriptional activation of integrin beta6 during the epithelial-mesenchymal transition defines a novel prognostic indicator of aggressive colon carcinoma. J Clin Invest. 2005;115:339-347.
6. Hazelbag S, Kenter GG, Gorter A, et al. Overexpression of the avb6 integrin in cervical squamous cell carcinoma is a prognostic factor for decreased survival. J Pathol. 2007;212:316-324.
Overexpressed:
66. Nieberler, M. et al. Exploring the Role of RGD-Recognizing Integrins in Cancer. Cancers (Basel) 9 (2017).
95. Sipos, B. et al. Immunohistochemical screening for beta6-integrin subunit expression in adenocarcinomas using a novel monoclonal antibody reveals strong up-regulation in pancreatic ductal adenocarcinomas in vivo and in vitro. Histopathology 45, 226-236 (2004).
α4β1 -
MM - expression low to absent in early disease, emergent and upregulated during disease progression in patient Bx 1, 25, 46, 47
Through this bidirectional linkage, integrins provide spatially restricted communication lines between the cell and their microenvironment, act as key mechanosensing and force-transducing units and coordinate actin cytoskeletal polymerisation to regulate important biological functions including proliferation, gene expression, cell survival and cell motility (Legate et al, 2009). In the setting of cancer, these same integrin-mediated processes are exploited by tumour cells to promote invasiveness and oncogenic survival and to engineer a host microenvironment that is conducive to tumour growth and metastatic spread.
ANTIBODY blockade references
20. Callahan MK, Wolchok JD. At the bedside: CTLA-4- and PD-1-blocking antibodies in cancer immunotherapy. J Leukoc Biol. 2013;94(1):41-53.
21. Hamid O, et al. Safety and tumor responses with lambrolizumab (anti-PD-1) in melanoma. N Engl J Med. 2013.
22. Topalian SL, et al. Safety, activity, and immune correlates of anti-PD-1 antibody in cancer. N Engl J Med. 2012;366(26):2443-54.
23. Brahmer JR, et al. Phase I study of single-agent anti-programmed death-1 (MDX-1106) in refractory solid tumors: safety, clinical activity, pharmacodynamics, and immunologic correlates. J Clin Oncol. 2010;28(19):3167-75.
24. Ohaegbulam KC, et al. Human cancer immunotherapy with antibodies to the PD-1 and PD-L1 pathway. Trends Mol Med. 2015;21(1):24-33.
Key characteristics of VHHs include their high affinity and specificity (equivalent to conventional antibodies), high thermostability, good solubility and strictly monomeric behavior, small size (2.5 nm in diameter and about 4 nm in length; ~15 kDa), relatively low production cost, ease of genetic engineering, format flexibility or modularity, low immunogenicity, and a higher penetration rate into tissues (3, 41-44). The short half-life of VHHs in blood circulation is well suited to certain applications such as tumor imaging or delivery of toxins or radioisotopes to diseased tissues where rapid clearance is required. However, the pharmacokinetic behavior of VHHs can also be improved by extending their half-lives using different formatting options, including PEGylation or fusion to serum albumin or an anti-serum albumin moiety (43, 45, 46).
As intrabodies, VHHs are also ideally suited for cytosolic expression due to their ability to fold in the reducing intracellular environment. This feature likely reflects the single disulfide linkage present in the VHH domain, as compared to the multi-domain structure and multiple disulfide linkages of conventional antibodies, and may not be completely general to all VHHs but appears to be quite common; intracellular expression of VHHs has been widely and productively exploited for in vivo cellular imaging (5, 57) as well as to inhibit the function of viral proteins (58, 59)
a single cluster of D and J segments that serves the heavy chain only antibody variable regions as well as the variable regions of conventional IgGs. Later genomic studies shed light on the origin of HCAbs in dromedary camels and alpacas. It is now established that HCAbs are produced from the same IgH locus as conventional antibodies, but with distinct sets of genes for the generation of HCAbs. (18641337)
The paratope of the VHH is generated by its three CDRs; to achieve the requisite buried surface area when contacting antigen, the CDR3 loops are longer than in conventional antibodies. VHHs typically bind in clefts or depressions on the protein surface, or at domain-domain and subunit-subunit interfaces.
While interaction with the antigen occurs primarily via CDR3, examples of extensive involvement of CDR2 or the framework have been identified as well. Consequently, the ranges of epitopes sampled by VHHs and conventional antibodies may overlap but are not identical. A striking example is a VHH directed against HIV gp120, believed to bind to and protrude into its CD4-binding pocket, which may explain the broad neutralizing potential of this particular llama-derived VHH.
with particular emphasis
α2β1 as marker
CA-19-9
Integrins background
Monoclonal antibody-based treatment of cancer has been established as one of the most successful therapeutic strategies for both hematologic malignancies and solid tumors in the last 20 years. The initial combining of serological techniques for cancer cell surface antigen discovery with hybridoma technology led to a series of landmark clinical trials that paved the way for new generation antibodies and subsequent clinical success. The modulation of immune system interplay with tumor cells through targeting of T cell receptors has emerged as a powerful new therapeutic strategy for tumor therapy and to enhance cancer vaccine efficacy.
Clinical trials demonstrated that PD-1 blockade is an attractive way to reinstate the host's immune function in lymphoid neoplasms, particularly classical Hodgkin lymphoma. PD-1 blockade as a single therapy or in combination with other immune checkpoint inhibitors is being explored in other hematologic cancers.
aims of this project
Material & methods
VHH sequences from phage panning library
readily available from the Ploegh lab & collaborators
Sequence analysis
Results
A. Phage based panning
Figure 1. A. Sequence analysis. Flow diagram, (B) sequence alignment 5 sequences
(A)
(B)
Figure 2. (A) Alexa coupling strategy, (B.) LCMS
Figure 3. (A) Flowcytometry results (B) KD determination
B Previously identified integrin specific VHH
Figure 1. Sequences
Figure 2. FACS data
Figure 3. Competition assays?
B. PVDF membrane based panning
Figure 3. Flow diagram amounts
Figure 4. immunoprecipitation
Figure 5. Mass spec results
Figure 6. Targets identified?
Discussion
Recap
Broad context
Critical perspective on own work
perhaps too many rounds of phage panning?
Future direction
Coupling of radioactive particles as a therapeutic
Coupling of a dyeing agent to enhance tumour visibility during surgery
Near-infrared photoimmunotherapy moiety
Resistance to therapy is a characteristic feature of PDAC, and the extent of resistance is greater than in many other human tumors. This could be due to inefficient drug delivery, intrinsic and acquired resistance of the tumor, tumor hypoxia, or the insensitivity of cancer stem-like cells to currently used agents.
Extraction and Characterization of Bacterial DNA
ABSTRACT:
In this experiment, deoxyribonucleic acid (DNA) was extracted from Micrococcus lysodeikticus, a bacterium with a high guanine and cytosine content. The standard method of chloroform-isoamyl alcohol extraction was used, and the DNA was solubilized in Tris buffer. The DNA was then quantified and qualified using UV spectrophotometry. The DNA was extracted in relatively high amounts, but its purity was lacking, especially with regard to ribonucleic acid contamination. The hyperchromic effect was utilized to gauge the purity of the DNA. Future studies will focus on ecologically safer and more efficient methods of DNA extraction.
Key Words: Ribonuclease, RNA, DNA, DNA structure, bacteria.
Deoxyribonucleic acid (DNA) is the genetic material inside of a cell. The code of nucleotide base pair sequences allows the cell to translate messages into proteins that can be used in all parts of the cell. DNA is a double helix whose rungs are the hydrogen-bonded base pairs, held on a sugar-phosphate backbone that is covalently bound together1. This structure gives DNA the strength yet flexibility it needs to unzip so that the genetic material it holds can be accessed2.
The complementarity of the DNA molecule is truly what sets it apart from other molecules. Even if the DNA is denatured and the hydrogen bonds come undone, because each base pairs with only one specific partner, the DNA can spontaneously come together again. The bases line up with their respective pairs due to the energetic favorability of forming as many hydrogen bonds as possible2.
Because of DNA's importance as a biological storage molecule, accessing the DNA trapped inside of the cells has become a main avenue of research for scientists. Depending on the organism, different solvents and methods can be used to yield the highest amounts and most pure DNA.
Even within the same types of organisms, there may be different challenges that arise from a specific species. Specifically, in bacteria, different strains can have higher or lower G+C content3. Because the guanine and cytosine nucleotides are bound together by three hydrogen bonds, as opposed to the two that bind adenine and thymine, G+C pairs are more difficult to pull apart. This means that harsher solvents may have to be used in order to adequately extract and solubilize the DNA.
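One way to see the practical consequence of G+C content is through the classic Marmur-Doty relation between base composition and melting temperature. The sketch below is illustrative only: this empirical formula is not used in the report, and it assumes long duplex DNA in standard saline citrate buffer.

```python
# Empirical Marmur-Doty relation: Tm ~= 69.3 + 0.41 * (%GC).
# Assumption: long duplex DNA in standard saline citrate.
def melting_temp_c(percent_gc: float) -> float:
    """Approximate DNA melting temperature (deg C) from G+C content."""
    return 69.3 + 0.41 * percent_gc

# M. lysodeikticus DNA is GC-rich (~72% G+C), so it melts well above
# a more AT-rich genome.
print(melting_temp_c(72.0))  # ~98.8 deg C
print(melting_temp_c(40.0))  # ~85.7 deg C
```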
Once the DNA is extracted, it can be denatured as a way to assess its purity. The hyperchromic effect is a phenomenon that occurs when DNA is denatured: the two strands of the double helix come apart and assume a random, coiled formation4. Because there is much more exposed surface area in randomly coiled DNA than in the uniform, compact double helix, the denatured DNA absorbs substantially more UV light at a wavelength of 260 nm.
In this experiment, the DNA of Micrococcus lysodeikticus was extracted using a chloroform-isoamyl alcohol method and was precipitated using cold ethyl alcohol. Following this partial purification, the DNA was quantified and qualified using UV spectroscopy. The hyperchromic effect was employed to ascertain the purity of the DNA by denaturing the molecule at high temperatures.
EXPERIMENTAL PROCEDURES
The experiment was conducted as described by Boyer2.
To begin the experiment, a set amount of freeze-dried bacterial cells of the species Micrococcus lysodeikticus was weighed out and suspended in a set volume of saline-EDTA in a 50 mL Erlenmeyer flask. An aliquot of 10 mg/mL lysozyme was added to the bacterial solution, which was then incubated at 37 °C for 30 minutes.
After the incubation, an aliquot of 25% sodium dodecyl sulfate was added to the mixture, which was heated at 60 °C for 10 minutes and then cooled to room temperature. The mixture was then transferred to a Nalgene centrifuge tube.
An aliquot of 5 M sodium perchlorate solution was added to the solution, followed by a generous portion of 24:1 chloroform-isoamyl alcohol. This mixture was then gently shaken for 20 minutes, with gloves on, so that the proteins in the mixture could be separated out. The solution was then centrifuged in a refrigerated centrifuge at 7800 rpm for 5 minutes. The aqueous upper layer was carefully transferred into another container, where ice-cold ethyl alcohol was gently poured over it in generous amounts to precipitate the DNA. The nucleic acids were then spooled and transferred to a set volume of Tris buffer. Once a sufficient amount of DNA had been spooled, the aqueous DNA mixture was split into two portions; to one sample, RNAse was added and mixed by inversion. Both samples were then stored in a 4 °C refrigerator for one week, after which the rest of the procedure was conducted.
Both the DNA with and without RNAse were used to make separate dilutions such that the absorbance at 260 nm was approximately 0.400. From both dilutions, 2 mL aliquots were added to microcentrifuge tubes, and the two types of DNA solution were treated in the same fashion. One sample was kept at room temperature, one tube was denatured in boiling water for 10 minutes and then left to cool slowly to room temperature, and the other tube was denatured in boiling water for 10 minutes and then placed in an ice bath to cool quickly to room temperature.
Absorbance values were gathered from the room temperature samples at 260 nm and 280 nm. The absorbance of the room temperature sample at 260 nm was recorded again at the end of the experiment. The sample that was heated and allowed to cool slowly had its first absorbance recorded immediately after it was taken out of the boiling water, and another 260 nm absorbance value was taken after it had cooled to room temperature. The sample that was heated and cooled in an ice bath had its absorbance measured once it reached room temperature.
To analyze the data gathered from this experiment, the following two equations were used to determine how much DNA was isolated.
Equation 1. Determining the concentration of the DNA solution.
Equation 2. Determining the amount of DNA isolated.
As a point of comparison, the theoretical amount of DNA that could be isolated from the mass of cells used was determined using the following.
Equation 3. Determining the theoretical mass of DNA.
Next, the following equation was used to determine the percent of the cell's weight that is composed of DNA.
Equation 4. Percentage of cell weight that is DNA
Then the purity of the DNA could be determined from the ratio of the absorbance at 260 nm to the absorbance at 280 nm. This also lends insight into what impurities could be in the DNA, such as protein or RNA (ribonucleic acid). The following equation expresses the change in the absorbance values at 260 nm to reflect what occurred in terms of the hyperchromic effect.
Equation 5. Percentage of change in the absorbance values.
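The bodies of Equations 1 through 5 did not survive in this copy of the report, so the sketch below restates them in their standard textbook forms as a hedged reconstruction. In particular, it assumes the usual conversion of 50 µg/mL of double-stranded DNA per A260 unit, and the DNA fraction of cell weight is parameterized because the report's exact constant is not recoverable.

```python
# Hedged reconstruction of Equations 1-5 in standard textbook form.
A260_TO_UG_PER_ML = 50.0  # assumed: 1.0 A260 unit ~ 50 ug/mL dsDNA

def dna_concentration(a260: float, dilution_factor: float = 1.0) -> float:
    """Equation 1: DNA concentration (ug/mL) from absorbance at 260 nm."""
    return a260 * A260_TO_UG_PER_ML * dilution_factor

def dna_mass(concentration_ug_ml: float, volume_ml: float) -> float:
    """Equation 2: total DNA isolated (ug)."""
    return concentration_ug_ml * volume_ml

def theoretical_dna_mass(cell_mass_ug: float, dna_fraction: float) -> float:
    """Equation 3: expected DNA (ug) for an assumed DNA fraction of cell weight."""
    return cell_mass_ug * dna_fraction

def percent_cell_weight(dna_ug: float, cell_mass_ug: float) -> float:
    """Equation 4: percentage of the cell mass that is DNA."""
    return 100.0 * dna_ug / cell_mass_ug

def percent_change_a260(a260_treated: float, a260_rt: float) -> float:
    """Equation 5: hyperchromic shift relative to the room-temperature sample."""
    return 100.0 * (a260_treated - a260_rt) / a260_rt
```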
Table I. Summary of results from the spectrophotometry data collected for the DNA and DNA-with-RNAse samples. Equations 1 through 5 were used to determine the values listed ("n.d." marks values not determined).

Sample           DNA mass (µg)  Theoretical DNA mass (µg)  Percent cell weight (%)  A260/A280  % change A260, Hot vs RT  % change A260, Hot-RT vs RT  % change A260, Hot-Ice-RT vs RT
DNA              1004           1787                       0.75                     2.71       12.5                      -4.44                        6.77
DNA with RNAse   794            n.d.                       n.d.                     4.51       31.2                      -6.98                        25.9
RESULTS
The results of the experiment are listed in Table I. The calculations were performed for both the DNA samples and the DNA with RNAse samples.
DISCUSSION
In this experiment, various quantities surrounding the DNA sample were determined so that the amount of DNA and the purity of the DNA could be assessed. This was possible due to the scientific knowledge of the structure of DNA.
The theoretical and actual masses of the DNA were close to each other, indicating that a sufficient amount of DNA was isolated and the isolation process was performed accurately. Additionally, the percent cell weight matched the theoretical cell weight of 0.75%. However, the A260/A280 ratio lends insight into the purity of the DNA, and the ratio is much higher than the pure-DNA value of 1.90. This suggests that the RNAse was ineffective at removing an adequate amount of the RNA. However, the A280 value used seems to be skewed in a way that unnecessarily inflates the ratio; further spectrophotometry would have to be done on the same DNA to determine the most accurate ratio for the sample.
The hyperchromic effect took hold both for the sample that was boiled and quickly cooled and for the sample that was boiled and measured immediately. There appeared to be a large difference between rapid cooling and slow cooling, which makes logical sense: the more time the DNA is given to arrange itself back into its complementary sequence, the more precise and accurate a job the intermolecular forces will do. If the DNA is cooled quickly, it does not have time to rearrange precisely; rather, it can get stuck in tangles and therefore absorb more UV light than the normal double helix.
While the chloroform-isoamyl alcohol procedure used in this experiment was effective, it is important to keep striving for more efficient and/or more ecologically friendly alternatives to hazardous chemicals. In a study by Cheng and Jiang, the researchers developed a method of DNA extraction that cut the time it takes to extract DNA from microbes by more than half and also eliminated the need for a separate cell-wall lysing agent5.
Another reason to be on the lookout for different procedures for DNA extraction is due to the blossoming field of microbiome research. In a study by Knudsen et al, the choice of DNA extraction methods influenced the data that was garnered about the population in the microbiome6. Therefore, microbiome researchers must be cognizant of how experiments early on in the project could affect assumptions down the line. Reasons such as these mean that science must always be advancing into newer and better techniques.
REFERENCES
(1) Yakovchuk, P.; Protozanova, E.; Frank-Kamenetskii, M. D. Nucleic Acids Res. 2006, 34 (2), 564-574.
(2) Boyer, R. Modern Experimental Biochemistry, 3rd ed.; Roberts, B., Lake, J., Prescott, M., Eds.; Benjamin/Cummings: San Francisco, 2000.
(3) Sanders, C. A.; Yajko, D. M.; Hyun, W.; Langlois, R. G.; Nassos, P. S.; Fulwyler, M. J.; Hadley, W. K. J. Gen. Microbiol. 1990, 136 (2), 359-365.
(4) Lara Castellazzi, C.; Orozco, M.; Amadei, A. J. Phys. Chem. B 2013, 117, 8697-8704.
(5) Cheng, H. R.; Jiang, N. Biotechnol. Lett. 2006, 28 (1), 55-59.
(6) Knudsen, B. E.; Bergmark, L.; Munk, P.; Lukjancenko, O.; Prieme, A.; Aarestrup, F. M.; Pamp, S. J. bioRxiv 2016, 1 (5), 064394.
Extraction Characterization and Dye Adsorption Ability of Chitin from Crab Shell Waste
ABSTRACT
Chitin is the second most abundant natural polysaccharide after cellulose and is present in the exoskeletons of crustaceans such as crabs and shrimp, as well as in insects and fungi. It is the main structural component of the exoskeletons of animals such as insects and crustaceans. Crab, shrimp, squilla and fish scale waste is ideal raw material for chitin production. The present work is aimed at the extraction of chitin from crab shells. The methodology includes acid hydrolysis and demineralization followed by a deproteinization step. The chitin produced was analysed by FTIR; based on the interpretation of the spectrograms of the two chitin samples synthesized in the present work, all expected functional groups are seen. The applications of chitin are numerous, but this study focuses on its dye adsorption ability.
INTRODUCTION
The shellfish industry, which is prominent in all coastal countries, generates about 60,000 to 80,000 tons of waste (Muzzarelli et al., 1986). Even though the wastes are biodegradable, the dumping of large quantities makes the degradation process slow, resulting in accumulation of waste over time, which is a major environmental concern. A quick and effective solution is the recycling of shell wastes and extraction of commercially viable substances like chitin from them. Chitin on its own has various applications. Chitin is a natural polysaccharide of major importance; this biopolymer is synthesized by an enormous number of living organisms, and it is among the most abundant natural polymers after cellulose (Rinaudo et al., 2006).
The discharge of dyestuffs into water systems poses a major threat to the environment, and adsorption is a cost-effective and promising method to remove dyes from effluents. Effluents from various industries contain harmful coloring agents, which have to be removed to maintain the quality of the environment; paper, fabric, leather and dyestuff production are some of the industries that release harmful effluents (Lin et al., 1993). The aim of the present study was to investigate the adsorption capability of chitin toward a major industrial dye, methylene blue.
MATERIALS AND METHOD
Sample preparation - Crabs were collected from Dapodi fish market, Pune. The crabs' inedible parts, including the head, body shells and tails, were removed from the whole body for extraction of chitin. The crab shells were washed, air dried and used for extraction.
Extraction of chitin-
Crab shell waste → Crab shell powder → Demineralization → Deproteination → Extracted chitin
Figure 1.Flow chart of basic steps of Extraction of Chitin.
Process I-
10 grams of sun-dried crab shell waste was demineralized by adding 1.5 N HCl at room temperature for 1 hour. The acid was discarded and the shells were washed with distilled water until the pH was neutral. The shells were then deproteinized with 0.5% NaOH at 100 °C for 30 minutes. The protein solution was removed, the shells were washed thoroughly with distilled water, and the pH was checked. The deproteinization was then repeated by adding 3% NaOH to the sample at 100 °C for 30 minutes. After draining the residual proteins along with the effluents, the sample was once again washed until the pH was approximately neutral, giving the chitin slurry. The excess water was removed, the alkali was drained off, and the product was washed thoroughly with distilled water until the pH was below 7.5, then dried at ambient temperature (30 ± 2 °C).
Process II-
10 grams of crab shell waste was refluxed in 100 ml of sodium hypochlorite (NaClO) solution at 100 °C for 10 minutes. The NaClO solution was decanted and the powder was washed with distilled water; this step was repeated once more. The sample was then refluxed in 50 ml of 1 M HCl at 75 °C for 15 minutes. The solution was decanted and the sample washed with distilled water until neutral. The sample was then refluxed in 50 ml of 1 M NaOH (sodium hydroxide) solution at 100 °C for 2 minutes to remove any protein residues. The solution was decanted and the remaining sample washed with distilled water until neutral. The solids were filtered off and placed in an oven at 60 °C for a week.
Characterization of Chitin
Solubility test - Chitin dissolves completely in 1% acetic acid. To estimate the chitin produced, the sample was taken out of storage and weighed. It was then put into a clean beaker, 10 to 20 ml of 1% acetic acid was added, and the mixture was kept on a shaker for 30 to 40 minutes. The sample was then taken out and carefully weighed (Abhrajyoti Tarafdar et al., 2013).
FT-IR spectroscopy: The samples were analysed by FT-IR spectroscopy at the Instrumentation Centre, Solapur University; the resulting graphs plot wavenumber versus percent transmission (Pandharipande S et al., 2016).
Dye adsorption by chitin: A stock solution of the dye was prepared by adding 10 mg of methylene blue powder to 1000 ml of distilled water (Paula Szymczyk et al., 2015). The pH of the dye solutions was adjusted using 1 N NaOH or 1 N HCl. About 1 g of extracted chitin (adsorbent) was added to 100 mL of dye solution (adsorbate); a control was also maintained without addition of chitin. At specific time intervals, aliquots of 2-3 ml of suspension were filtered and used to evaluate the adsorption of dye. The absorbance spectrum of the supernatant was then measured using a UV-Vis spectrophotometer, and the concentration of adsorbed dye was calculated from the absorbance value at 668 nm.
Percentage of dye adsorption was estimated by the following formula:
% adsorption = [(C0 − C) / C0] × 100, where C0 is the initial concentration of the dye solution and C is the concentration of the dye solution after adsorption (S. Dhananasekaran et al., 2015).
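As a worked illustration of the formula above, here is a minimal Python sketch. The residual concentration is hypothetical, chosen to reproduce the ~39% removal reported in the Results; in practice C0 and C would come from A668 readings converted via a Beer-Lambert calibration curve for methylene blue.

```python
# Minimal sketch of the dye adsorption calculation.
def percent_adsorption(c0_mg_per_l: float, c_mg_per_l: float) -> float:
    """[(C0 - C) / C0] * 100, per the formula above."""
    return 100.0 * (c0_mg_per_l - c_mg_per_l) / c0_mg_per_l

# The stock was 10 mg of methylene blue per 1000 ml, i.e. 10 mg/L.
# A residual concentration of 6.1 mg/L reproduces the reported ~39% removal.
print(percent_adsorption(10.0, 6.1))  # -> 39.0
```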
RESULTS AND DISCUSSION
Extraction of chitin from crab requires harsh chemical treatments. Although crab shells contain mostly chitin, they also contain proteins and minerals: proteins are removed by deproteinization, and carbonates and other salts are removed by demineralization (Badawy et al., 2011).
Process I - Solubility test for sample 1
Initial weight of chitin was measured to be 0.40 grams. Final weight of chitin after reaction with 1% acetic acid was measured to be 0.22 grams, and hence the total dissolved weight of chitin was calculated to be 0.18 grams.
Process II - Solubility test for sample 2
Initial weight of chitin produced was measured to be 0.40 grams. Final weight of the shells after reaction with 1% acetic acid was measured to be 0.20 grams, and hence the total dissolved weight of chitin was calculated to be 0.20 grams. Therefore, the chitin produced by Process II was more readily soluble in 1% acetic acid solution than that produced through Process I.
FTIR analysis - The FTIR spectra of the samples were interpreted for the possible presence of functional groups; the details are given in Table 1 (Dhananasekaran S et al., 2016).
Table 1. FTIR bands of standard chitin and the two crab chitin samples

Sr. no  Standard chitin (cm-1)  Sample 1 (cm-1)  Sample 2 (cm-1)  Group
1       3300-3250               3373             3278             N-H
2       2891                    2952             2920, 2826       C-H
3       1680-1660               1653             1647             C=O
4       1560-1530               1560             1568             Amide
5       1072                    1017             1024, 1094       C-O-C
6       952                     952              901              Amide III
7       750-650                 667, 625         685, 617         N-H
The FT-IR spectra of the chitin isolated from crab shell are given in Figures 2 and 3.
Theoretically, α-chitin is characterized by three characteristic amide bands appearing at 1650, 1620, and 1550 cm-1. In this study we observed FTIR bands at 1653 cm-1 and 1560 cm-1 for chitin sample 1 (Figure 2). The peak at 1653 cm-1 corresponds to the C=O stretching vibration of the amide I band, and the peak at 1560 cm-1 corresponds to the N-H deformation of amide II (Muhammed, R., et al., 2010).
FIG 2: FT-IR spectrum of chitin (Sample 1). FIG 3: FT-IR spectrum of chitin (Sample 2).
In sample 2 (Figure 3), the peaks are observed at 1647 cm-1 and 1568 cm-1, corresponding to the amide I C=O stretch and the amide II N-H deformation, respectively. From the interpretation of the FT-IR spectra it can be said that all functional groups formed during synthesis have been identified as peaks, including the amide, carbonyl and hydroxyl groups. This indicates the successful formation of the chitin biopolymer (Muhammed, R., et al.).
Dye adsorption by Extracted Chitin
Graph 1:Effect of pH on removal of dye Graph 2:Effect of contact time on removal of dye
Graph 1 shows the relationship between pH and the percentage removal of dye. Readings were taken at pH values between 4 and 9, at intervals of 30 minutes. The results show that the effectiveness of dye adsorption onto chitin decreased with increasing pH; the effect of dye adsorption was found to be at its maximum at pH 7, i.e. neutral.
The dye removal percentage as a function of contact time between the dye and the extracted chitin is shown in Graph 2. The observed contact times ranged from 30 to 180 minutes, in increments of 30 minutes. With increasing incubation time, the effectiveness of dye adsorption by chitin increased; removal was at a maximum after 180 minutes at pH 7, where it reached 39%.
CONCLUSION
Chitin is one of the most abundant biopolymers in nature and is a major component of the supporting tissues of organisms such as crustaceans, fungi, and insects, with wide application in various fields. This study demonstrates the production of chitin from crab shell; the FTIR and chemical characterization studies confirm its production. The removal of dyes by adsorption onto crab shell chitin was then investigated, and the interaction between chitin and dye was found to depend strongly on the pH of the solution. The maximum dye reduction was obtained at an optimum contact time of 180 minutes and an optimum pH of 7. Crab shell chitin proved a comparatively good adsorbent, removing almost 39% of the dye within 3 hours. From the results of the adsorption study, it is concluded that chitin can be used as a coagulant of dyes because of its high adsorptive capacity, cost effectiveness, environmentally friendly behavior and availability in nature.
Characterization of Rice Husk Varieties in Uganda for Biofuels
Uganda has registered recent successes in rice production making it a potential rice basket for Eastern Africa (Africa Rice Center, 2013). The remarkable rice boom in Uganda is attributed to good farming practices, premium market prices, and favorable policies that have stimulated large private investment in the rice sector over the last few years. As a result, paddy production jumped from about 120,000 t in 2002 to more than 220,000 t in 2011 (Africa Rice Center, 2013). Rice husk (RH) is the waste generated during the milling process of rice. On average 20% of the paddy rice is husk (Giddel and Jivan, 2007), giving an annual global total production of about 144 million Mt (FAO, 2011). Several conversion techniques exist that can be utilized to extract energy from rice husks. They include thermo-chemical, bio-chemical and physico-chemical conversion techniques (Delivand et al., 2011). Besides the use of rice husk for production of biofuel, it can also be used to manufacture bio-products such as particle boards, insulation boards, ceiling boards and as a fertilizer.
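A back-of-envelope calculation using the figures quoted above shows the scale of the resource; the short sketch below is illustrative only, combining the ~20% husk fraction with Uganda's 2011 paddy output.

```python
# Illustrative estimate of Uganda's annual rice husk output,
# using the ~20% husk fraction and >220,000 t of paddy (2011).
paddy_tonnes = 220_000
husk_fraction = 0.20

husk_tonnes = paddy_tonnes * husk_fraction
print(husk_tonnes)  # ~44,000 t of rice husk per year available for conversion
```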
Moreover, the burning of rice husk under controlled conditions generates rice husk ash (RHA), which is a valuable material for industrial applications due to its high silica content. RHA contains 87-97% silica, which can serve as a source for many silicon-based materials (Rozainee et al., 2008). In addition, the ash from biomass fuel contains only trace amounts of heavy metals, which makes it fairly easy to dispose of; it can also be a good fertilizer (Olanders and Steenari, 1994).
Despite the potential benefits associated with rice husk, only about 17% of the rice husk produced globally finds an application each year (Ankur, 2010), suggesting that very large quantities become waste. The common practice of burning rice husk without heat recovery and the open dumping of these wastes still prevail, especially in developing countries, to which Uganda is no exception (UNEP, 2009). In other places, the rice husk is left to rot in situ, subsequently emitting greenhouse gases (GHG) and causing other environmental problems (Jeng et al., 2012). Moreover, in many developing countries the utilization of rice husk is constrained by the unavailability of improved technology. The chemical differences between biomass components directly influence their chemical reactivities, which is why knowledge of the total amount of each component is crucial to predict the efficiency of a biomass conversion process (Ando et al., 2000). Similarly, the utilization of RHA depends on the composition of the ash, which in turn depends on the plant species, growth conditions and ash fraction (Demirbas, 2005). This study therefore set out to investigate the properties of rice husk from ten selected rice varieties in Uganda. This was done so as to precisely determine the possible technologies for potential utilization of the husks from each rice variety. Additionally, an economic comparison was made to evaluate the benefits of electricity production from rice husks as opposed to diesel engine generators.
Materials and methods
Sample collection and preparation
Ten varieties of rice grains were collected from one geographical region in Uganda. The samples were de-husked in an identical de-husker. Approximately one kg of husk from each rice variety was carefully kept in separate sampling bags. The samples were each subsequently milled to pass through a 1 mm mesh sieve using a 1093 Cyclotec sample mill for subsequent characterization.
Characterization of rice husk varieties
The characterization work involved bulk density, proximate analysis, ultimate analysis, lignin, hemicellulose and cellulose analyses. Three experimental runs were conducted for each parameter and then the average value obtained.
Testing and Applications of Composite Materials
Young's modulus is an important indicator of a material's ability to withstand changes in the length of a specimen when stress is applied along its length, whether compression or elongation. This information is crucial in deciding which composite material is suitable for which application. The combination of DIC and FEA analysis is therefore a good approach for obtaining stress-strain curves that include the fracture strain of a composite material, from which Young's modulus is easily determined.[4]
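To make the last step concrete, the sketch below estimates Young's modulus as the slope of the initial, linear-elastic portion of a stress-strain curve, such as one produced by the DIC/FEA analysis described above. The data values are illustrative only, not taken from any of the cited experiments.

```python
# Minimal sketch: Young's modulus E as the slope of the elastic region
# of a stress-strain curve (illustrative data).
import numpy as np

strain = np.array([0.000, 0.001, 0.002, 0.003, 0.004])  # dimensionless
stress = np.array([0.0, 70.0, 140.0, 210.0, 280.0])     # MPa

# Least-squares linear fit through the elastic region; the slope is E.
E_mpa = np.polyfit(strain, stress, 1)[0]
print(E_mpa / 1000)  # ~70 GPa for this illustrative data set
```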
Tensile testing is one of the most successful traditional methods of testing a composite's strength and the corresponding characteristics. However, not every composite material can be tested equally and under the same experimental setup; in particular, the type of reinforcement plays a large role. Recall that both continuous and discontinuous fibers are used for reinforcement: when continuous fibers are used, the material becomes unidirectional, meaning that its properties depend on direction. Different properties are obtained when loads are applied along, across, and normal to the direction of the fibers. Because of these characteristics, unidirectional composites are challenging to test in tension. The traditional clamps used to hold the specimen in place may induce unwanted stresses at the clamped areas, which disrupt the experimental results and the accuracy of the subsequent analysis. This paper proposes several ideas for improving the existing tensile-testing setup. One recommendation is to use a thin specimen in order to reduce the stress concentration across the depth of the specimen, numerically reducing any unwanted induced stresses. However, the specimen must still retain the original material properties, so the minimum depth to be tested has to be chosen carefully; this approach requires more information about the tested composite, specifically the minimum thickness that can faithfully represent the material's properties. Another recommendation is to use longer clamps to spread a uniform stress distribution over a larger part of the specimen area, so that the stress concentration is not confined to a small region at the ends; this approach is somewhat limiting, as it leaves only a small specimen area that can actually be tested. Lastly, the paper notes that the American Society for Testing and Materials (ASTM) has suggested a different solution: if the clamps used for tensile testing are made of the same material as the composite, the induced stresses are minimized. This approach has proven successful in several experiments, but it can be limiting when the specimen composite is relatively weak and not well bonded or reinforced. For most common uses, such as high-performance applications, this is not the case, so the approach remains valid. [5]
After reviewing several existing testing methods, this paper suggests a new method for testing composite materials for their strength characteristics: the composite is tested under both fatigue and static loadings. The main concept is to keep the end-to-shear load ratio constant along with the load translation, which is possible by using a mechanism capable of controlling the gripping pressure on the loaded specimen. The setup builds on the success of a similar experiment used for static loading; here, a similar but significantly updated setup is used for fatigue testing. The clamps and several other fixtures that secure the specimen were updated so that the results are not influenced by the specimen slipping from the frame or by the clamps sliding over the specimen surface. This modification allows accurate measurement of fatigue strength. By keeping the end-to-shear ratio constant, the test can be repeated for several trials (or cycles) to yield the fatigue strengths needed to build a strength model specific to the composite in question. Fatigue strength, also known as the endurance limit, is the stress that a material specimen can endure for a specified number of cycles without breaking apart. By performing a specified number of cycles at different stress levels, one can expect the specimen to eventually fail, as the strength and hardness of the composite decrease as cycles accumulate. Knowing the highest stress a material can withstand for a specific amount of time is very valuable when deciding which composite material to assign to a given application. If the intended application requires a material that can withstand many stresses applied over time (for example, the bolts of a lawn mower must have high endurance because of the constant vibrations coming from the mower), choosing a strong material is crucial. Applications that do not require extremely strong materials, however, can use lower-cost composites that satisfy the required performance while remaining cost-effective. The method described above is therefore very useful in the analysis and categorization of composite materials for various applications: the experimental setup yields accurate fatigue-strength results, which greatly help in making cost-effective decisions. [6]
Several testing methods for composites have been explored and presented, but if one thinks of applying them to every kind of composite available, a question arises: all of these methods rely mainly on applying force to the specimen, so how can a brittle composite specimen be properly tested? Many current methods are unsuitable for low-force applications and testing, so new methods have been introduced. One of them is the Modified Short Beam Shear (MSBS) specimen, used to test brittle specimens for their interfacial or attachment strength. The specimen is made as a sandwich of two reasonably thick aluminum plates with the brittle composite to be tested lying between them. This setup allows safe and accurate measurement of the strength of the composite in question. Among the most brittle composites are those made of thermosetting vinylester resin reinforced with multi-walled carbon nanotubes (MWCNT). The sandwiched configuration allows this brittle composite to be properly tested and its strength characteristics explored. The setup does not produce a pure shear stress state in the composite, but the result is still applicable, since the characteristics of this type of composite can be altered with chemical additions and changes. The post-experiment analysis is performed using the finite element method, which allows the experimental results to feed a model applicable across different scenarios where this composite is used. The failure mode is a combination of normal and shear stresses, a typical result for very brittle composites like this one. Overall, this setup provides a safe and experimentally sound environment for testing the strength properties of even highly brittle composite materials. Although this test refers specifically to thermosetting vinylester resin reinforced with MWCNTs, given the success of the method it is worth exploring whether it applies to other composites of the same or even higher brittleness.[7]
Extraction and Characterization of a Three Known Component Mixture
To determine the purity and identity of each recovered fraction, FTIR spectra were collected, TLC was conducted, and the melting point (MP) of benzoic acid was measured.
TLC was used to determine the purity of each component using an ideal eluent system, i.e. the one that most efficiently separates every compound on a TLC plate. To determine the ideal eluent, five different mixtures of hexane and ethyl acetate were tested on TLC plates with pure samples of o-toluidine, benzoic acid, and anisole, and with a mixture of the three pure compounds, to compare and conclude which eluent system separated the three components most efficiently. This was determined by comparing the spots in each lane of the TLC plate for each eluent. Table 2 (Ideal Eluent System) shows how the ideal eluent system was chosen, using only the mixture lane to compare the eluent systems.
Table 2: Ideal Eluent System

Eluent system              Compounds of mixture                 Distance traveled (cm)  Deviation
100% Ethyl Acetate         Anisole, Benzoic acid, O-toluidine   3.2, 2.7, 3.0           0.3
1:3 Ethyl Acetate/Hexane   Anisole, Benzoic acid, O-toluidine   0.7, 1.5, 3.1           0.8
1:1 Ethyl Acetate/Hexane   Anisole, Benzoic acid, O-toluidine   1.5, 2.0, 2.5           0.5
3:1 Ethyl Acetate/Hexane   Anisole, Benzoic acid, O-toluidine   0, 0.4, 3.0             0.6
100% Hexane                Anisole, Benzoic acid, O-toluidine   1.5, 2.6, 3.2           0.6
After analysis, the ideal eluent system is clearly the 1:1 ratio of ethyl acetate and hexane: the spots of the mixture were evenly spaced, exactly 0.5 cm apart (1.5, 2.0, 2.5 cm), the most consistent spacing among the eluent systems tested. This ideal eluent was then used to analyze the extraction samples by conducting the same experiment with the extracted fraction compounds.
Retention factor (Rf) values were then calculated for each TLC plate. A compound of low polarity will have a larger Rf value than a polar compound. Thus, Rf values can be used to identify a compound and assess its purity by comparing an unknown compound (the extraction fractions) to known pure compounds: if the Rf values are the same, the two compounds are most likely the same. Rf values were determined by dividing the distance (cm) traveled by the sample by the distance (cm) traveled by the eluent front. Table 3 (Rf Values of Ideal Eluent TLC Plates) shows the Rf values of the extracted-fraction TLC plates.
Table 3: Rf Values of Ideal Eluent TLC Plates

Eluent system              Compound                Distance traveled (cm)  Rf value
1:1 Ethyl Acetate/Hexane   Eluent front            3.5                     N/A
                           Pure Anisole            1.5                     0.43
                           Pure Benzoic acid       1.3                     0.37
                           Pure O-toluidine        2.0                     0.57
1:1 Ethyl Acetate/Hexane   Eluent front            3.2                     N/A
                           Anisole fraction        1.5                     0.47
                           Benzoic acid fraction   1.7                     0.53
                           O-toluidine fraction    1.7                     0.53
Based on the Rf values, it is clear that the 1:1 ethyl acetate/hexane eluent efficiently separated all three compounds from the mixture, as the Rf values of the pure samples compare closely to the Rf values of the fractions.
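A minimal sketch of the Rf arithmetic follows; the values are taken from Table 3, while the helper function itself is ours, not part of the original report.

```python
# Rf = distance travelled by the spot / distance travelled by the eluent front.
def rf(spot_cm: float, front_cm: float) -> float:
    return round(spot_cm / front_cm, 2)

print(rf(1.5, 3.5))  # pure anisole      -> 0.43
print(rf(2.0, 3.5))  # pure o-toluidine  -> 0.57
print(rf(1.5, 3.2))  # anisole fraction  -> 0.47
```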
FTIR spectra of each compound isolated by the acid-base extraction were then collected and analyzed; the major non-fingerprint peaks of each fraction are listed in Table 4 (FTIR Peaks) and were used to determine the purity of each extraction fraction.
Table 4: FTIR Peaks

Compound               FTIR bands (cm-1) and assignments
Anisole fraction       2931.97, 2862.49 (sp3-sp3 C-H stretch); 1634.00 (sp3-sp3 N-H); 1498.20 (sp2-sp2 C-C, aromatic); 1311.42 (sp3-sp3 C-O stretch)
Benzoic acid fraction  3067.77, 2935.13, 2862.49 (sp2-sp3 C-H stretch); 1965.60, 1915.07 (sp2-sp2 C=O stretch); 1675.06, 1602.42, 1586.63 (sp2-sp2 C=C stretch, in-ring)
O-toluidine fraction   3462.53, 3358.31, 1618.21 (sp3-sp3 N-H stretch); 3020.40, 2928.81, 2862.49 (sp3-sp3 C-H stretch); 1583.47 (sp2-sp2 C=C stretch)
By comparing the FTIR band values of the extracted fractions to the corresponding structures, the identity and relative purity of each fraction were concluded. The anisole fraction was only partially pure, showing signals of o-toluidine (an N-H band at 1634.00 cm-1). The benzoic acid and o-toluidine fractions were highly pure, as each FTIR band correlated with the structure of the compound: benzoic acid shows its characteristic C=O stretch bands at 1965.60 and 1915.07 cm-1, and o-toluidine shows three N-H stretches at 3462.53, 3358.31 and 1618.21 cm-1.
Acid-base extraction isolated all three components, but with low % recovery: 0.57% for O-toluidine, 1.17% for benzoic acid, and 0.95% for anisole. On the other hand, analysis of the FTIR spectra indicates that each component is highly pure, as each spectrum shows the bands specific to its compound, and the melting point range of the recovered benzoic acid, 123.3-126.0 °C, lies approximately 2 °C above the literature value of 122.0 °C.
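For reference, the % recovery values above presumably follow the standard definition below; the mass of each component initially present in the 10 mL stock solution is not stated in this excerpt, so this is a sketch of the calculation rather than a verification of the reported numbers:

\[
\%\ \text{recovery} = \frac{m_{\text{recovered}}}{m_{\text{initial}}} \times 100\%.
\]

Taken at face value, for example, recovering 0.106 g of benzoic acid at 1.17% recovery would imply roughly 0.106 g / 0.0117 ≈ 9.1 g of benzoic acid in the original stock.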
Conclusion
Overall, acid-base extraction succeeded as a means of separating and purifying organic compounds: all three components were separated with good purity, as the specific FTIR bands of each fraction correlated to the structure of its compound. Benzoic acid showed its C=O stretch bands at 1965.60 and 1915.07 cm-1, and O-toluidine showed three N-H stretches at 3462.53, 3358.31, and 1618.21 cm-1.
However, acid-base extraction was not an efficient way to obtain a high % recovery: overall % recovery was very low for all three components, with benzoic acid the highest (1.17%) and O-toluidine the lowest (0.57%).
Future work to improve the % recovery of acid-base extraction could use a 1:1:1 mixture of O-toluidine, benzoic acid, and anisole, and a 1:1 ratio of stronger acids and bases when converting the Brønsted-Lowry acid and base to their conjugate base and acid.
Experimental
All reactants were reagent grade and provided by the University of Colorado Denver Chemistry Department. FTIR spectra were obtained with a Nicolet iS5 ATR-FTIR spectrometer; all bands are reported in cm-1.
Isolation of o-toluidine:
Methylene chloride (10 mL) was added to a stock solution (10 mL; anisole, benzoic acid, o-toluidine). The organic solution was extracted with 3 M hydrochloric acid (3 × 20 mL). The aqueous layers were combined and adjusted to pH 12 with 10 M sodium hydroxide (10 mL). The basified aqueous solution was extracted with dichloromethane (3×). The organic layer was dried over magnesium sulfate, then collected via gravity filtration. The resultant organic solution was concentrated in a tared scintillation vial in a hot water bath, yielding a reddish-yellow oil, 0.856 g (7.99 mmol, 0.57%); FTIR (cm-1): 3462.53, 3358.31, 3020.40, 2928.81, 2862.49, 1618.21, 1583.47 (bands per the o-toluidine fraction in Table 4; the original text listed the anisole-fraction bands here, apparently swapped with the anisole section below). Note: thin-layer chromatography plates were developed in the 1:1 ethyl acetate/hexane system.
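As a sanity check on the reported molar quantity, here is a worked conversion using the standard molar mass of o-toluidine, 107.16 g/mol (not given in the text):

\[
n = \frac{m}{M} = \frac{0.856\ \text{g}}{107.16\ \text{g}\cdot\text{mol}^{-1}} \approx 7.99\ \text{mmol},
\]

which is the figure reported above. The benzoic acid and anisole quantities convert the same way, using molar masses of 122.12 and 108.14 g/mol, respectively.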
Isolation of benzoic acid:
Sodium hydroxide (5%, 15 mL) was added to the remaining stock solution. The aqueous layer was separated and acidified at 22 °C with 6 M hydrochloric acid (5 mL). The aqueous solution was allowed to cool in an ice bath until no more precipitate formed. The precipitate was collected via vacuum filtration into a tared scintillation vial, yielding a white solid, 0.106 g (0.868 mmol, 1.17%); mp 123.3-126.0 °C (lit. 122 °C); FTIR (cm-1): 3067.77, 2935.13, 2862.49, 1965.60, 1915.07, 1675.06, 1602.42, 1586.63.
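The acidification step works because protonation converts the benzoate salt back to neutral, water-insoluble benzoic acid. As a rough sketch, assuming the literature pKa of benzoic acid (about 4.2, not stated in the text) and a final pH near 2 after the HCl addition, the Henderson-Hasselbalch relation

\[
\frac{[\mathrm{A^-}]}{[\mathrm{HA}]} = 10^{\,\mathrm{pH}-\mathrm{p}K_a} = 10^{\,2-4.2} \approx 6\times10^{-3}
\]

implies that well over 99% of the benzoic acid is in its neutral form, which is why it precipitates on cooling.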
Isolation of anisole:
Methylene chloride (5 mL) was added to the organic layer from the previous extraction. The organic layer was dried over magnesium sulfate, then collected via gravity filtration. The resultant organic solution was placed in a tared scintillation vial in a hot water bath, yielding a reddish-pink oil, 0.901 g (8.33 mmol, 0.95%); FTIR (cm-1): 2931.97, 2862.49, 1634.00, 1498.20, 1311.42 (bands per the anisole fraction in Table 4; the original text listed the o-toluidine-fraction bands here, apparently swapped with the o-toluidine section above). Note: thin-layer chromatography plates were developed in the 1:1 ethyl acetate/hexane system.
Characterization of Amyloid Fibrils and Protective Effects of Silibinin
Amyloid fibrils are abnormal, fibrous protein deposits that accumulate on the outer membranes of cells. They are insoluble and serve no structural or motility function in humans. Amyloids have a major impact on diseases such as Alzheimer's and type II diabetes, which progress over time and are associated with high mortality (1). There are no effective treatments for amyloid-related diseases; therefore, the search for compounds that can inhibit the formation of amyloid fibrils and/or disaggregate preformed fibrils is a promising route to treatment (2, 3). Silibinin is extracted from the medicinal plant Silybum marianum, also known as milk thistle, and has traditionally been used to treat liver diseases (4). One study showed that silibinin can disrupt the complex structure of the amyloid protein and transform the fibrils into shapeless aggregates, thereby decreasing their contribution to amyloid-related diseases (5). However, amyloid-fibril proteins take various forms. This paper focuses on amyloid-fibril formation and characterization across several different amyloid-fibril proteins.
Further, this paper explores the cytotoxicity induced by amyloid fibrils in patients who take insulin. The term 'amyloid' was coined initially by Schleiden and then by Virchow in the mid-19th century to describe the iodine-stained deposits seen in the liver at autopsy. The deposits were initially thought to be high in carbohydrate until their high nitrogen content was established; the inaccurate name persisted despite the discovery of their highly proteinaceous composition (7). Microscopic studies and transmission electron micrographs confirmed that amyloid has a fibrillar, thread-like structure. Further advances in biomedicine and biophysics made it possible to isolate amyloid fibrils from tissues and to use X-ray fiber diffraction to exhibit their cross-β structure (8, 9). Amyloid fibrils are highly stable and insoluble, which makes them useful in a number of naturally occurring bionanotechnological contexts. However, fibrils can also be destructive: they can accumulate in tissue and form the basis of disease or aggravate a given disease (1). To understand more about amyloid fibrils, it was necessary to isolate them without completely dissociating them; their highly stable structure made it hard to isolate them from tissue without compromising their structural integrity.
It therefore became necessary to devise an effective method of isolation. Cohen and Calkins, in their paper 'The isolation of amyloid fibrils and a study of the effect of collagenase and hyaluronidase,' provided a method to isolate amyloid fibrils without destroying their structural integrity. Hepatic tissue rich in amyloid fibrils was taken from post-mortem patients, and the fibrils were extracted by repeated centrifugation and washing. The extracted fibrils, with normal liver tissue as a control, were then treated with the enzymes collagenase and hyaluronidase and centrifuged. Under the electron microscope, only the amyloid-rich liver tissue showed fibrils; hence a new technique to isolate fibrils was established (9). Cohen and Calkins's work was also one of the first to identify that the fibril, a major component of amyloid, is not a collagen. It also showed that fibrils do not contain high amounts of hyaluronic acid, making them less soluble in water (1, 9). To determine whether collagen is the primary component of the fibrils, the fibrils were treated with a buffer solution as the control and a collagenase solution as the experimental reaction. The collagenase-based reaction was incubated for different time periods to see whether the fibrils were broken down in the collagenase-based solutions.
The quantity of fibrils was found to be roughly the same in the control and experimental preparations at all time intervals. The ineffectiveness of collagenase indicated that no collagen was present in the fibrils; had the fibrils contained collagen, collagenase would have degraded them (9). The isolation of amyloid fibrils sparked growth in research on amyloid fibrils and their associated diseases. In one study, Westermark isolated amyloid from a lymph node of a medullary thyroid carcinoma using the method devised by Cohen and Calkins, and found that the amyloid fibrils carried a major protein subunit. The presence of a major subunit brought a major advance in understanding the structure of amyloid fibrils, which later helped in finding an inhibition mechanism for amyloid formation relevant to diseases such as Alzheimer's and type 2 diabetes (9, 10, 11). The structure of the fibril isolated from the thyroid carcinoma was studied by isolating tumor tissue from a patient with metastatic medullary thyroid carcinoma. The amyloid was isolated using the collagenase method of Cohen and Calkins. After centrifugation in the presence of the required buffers, the material was run on SDS-PAGE to separate the protein components. The acquired protein was then purified by gel filtration and observed under an electron microscope, where the amyloid fibrils appeared in the form of small clumps.
Spectrophotometry confirmed that no peaks appeared in the range where amyloid fibrils would normally be observed. Knowledge of the structure of amyloid fibrils helps in understanding the pathology of the amyloidoses and provides a structural basis for the rational design of drugs to inhibit or reverse amyloid formation (11). This can be beneficial in diseases such as diabetes and Alzheimer's. It has been suggested that nearly all proteins can form amyloid under certain conditions, which has implications for the understanding of protein folding (12). Amyloid precursor proteins do not share a common size, sequence, or secondary structure, yet the mature fibrils appear to share a similar, highly organized multimolecular morphology and similar mechanisms of toxicity (13). This suggests a possible common pathway in the formation of different types of amyloid fibrils. Sletten et al., in their study characterizing amyloid fibril proteins, further showed that human amyloid fibrils contain two different components: one soluble and one more heterogeneous. The soluble component, called the P component, is immunologically homogeneous, meaning it is made of the same kind of protein. The other component, the fibril, is heterogeneous, meaning it is made of different kinds of proteins. The amyloid fibrils were first acquired from lymph nodes of a medullary carcinoma of the thyroid and isolated using Cohen and Calkins's method of enzyme degradation.
Antisera against amyloid proteins were prepared and applied to the fibrils. Finally, gel filtration and SDS-PAGE were conducted to examine the various bands indicating proteins of different sizes and charges (14). As discussed earlier, two different components were found. It was also observed that the fibrils consisted mainly of a low-molecular-weight protein that was immunologically distinct and did not react with antisera against known amyloid fibril proteins (15). One study noted that the influence of the non-fibrillar, or P, component on amyloid-related diseases is well known: non-fibrillar proteins increase protein strength and the density of the fibrils' local interactions, causing them to form a compact, localized structure, as seen in different amyloid-related diseases. The soluble P component therefore plays an important role in the aggregation state and solubility of amyloid fibrils, and hence in amyloidoses, or amyloid deposits (19). Amyloid deposits have varied origins and chemical compositions, yet they share common systemic forms. The common systemic forms, as seen in Sletten et al., are related to serum amyloid protein or immunoglobulin chains, but in amyloidosis the fibril proteins may contain hormone-like peptides, such as procalcitonin in medullary carcinoma of the thyroid and islet polypeptide in pancreatic islets, especially in type 2 diabetes (15, 16, 17).
Insulin, a polypeptide, has long been known to be capable of conversion in vitro to a fibrillar, amyloid-like form (16). Fibrillar insulin has been demonstrated in clinically significant localized amyloidosis, appearing in microscopic quantities around the needle tip after prolonged subcutaneous infusion of insulin in rats and in a human diabetic subject (18). In a study conducted in 1988, the major amyloid fibril protein was extracted and, by means of its amino acid composition and sequence, shown to contain intact insulin molecules (15). Diche et al., in that 1988 study, described the amyloid deposits occurring in an insulin-dependent diabetic patient and gave the analytical findings for the extracted amyloid fibril protein. A young man with type 1 diabetes mellitus developed localized amyloidosis at the sites of his insulin injections. The amyloid fibrils were extracted, and histopathology was conducted on the tissues. This was followed by immunohistochemistry, staining for proinsulin and with an antiserum to insulin. The tissue was also homogenized to obtain and purify amyloid fibrils (15). It was found that amyloids form by partial proteolysis of larger precursor molecules, which in cases of localized amyloidosis, as seen in this diabetic patient, occurs close to the site of deposition. The lower-molecular-weight fragments spontaneously polymerize into fibrils. Amyloid P component was also demonstrable in the deposits, as it has been in every other amyloid (15).
Katebi et al. examined the potential of silibinin to interact with bovine insulin (insulin derived from cows) and inhibit its amyloid formation. The study looked at the ability of insulin to bind silibinin, and further at the protective effects of silibinin against the toxicity caused by amyloid fibrils on neuroblastoma cells, a cancer formed in early forms of nerve cells (3, 6). The specific effect of silibinin on the fibrillation of bovine insulin was studied using dye-based amyloid-detection assays. These assays use a dye to stain the amyloid fibrils so that changes can be observed under a high-resolution microscope, and they quantify changes in amyloid fibril formation under various conditions, such as different silibinin concentrations or temperatures (20, 21). Higher doses of silibinin were found to inhibit amyloid fibril formation over a given time: the time to the formation of the first phase of the new structure increased from 2 hours to 4 hours in the presence of silibinin.
This indicated that silibinin interferes with the early formation of the insulin complex, which strengthens its promise as an effective treatment. The impact of silibinin on the structural changes caused in insulin by amyloid fibril formation was also studied using dye assays: the presence of silibinin lowered the intensity of the color, indicating that silibinin inhibits the structural changes in insulin due to amyloid formation (3, 22). The second part of the study analyzed the protective effects of silibinin against insulin amyloid fibril toxicity in nerve cancer cells. This was done using an assay that acts as a sensitive and reliable indicator of cell metabolism: it follows the shift of a yellow dye to purple, driven by the activity of a mitochondrial enzyme that removes hydrogen from the cells. The assay was conducted as a cell-viability test for the effect of silibinin on the amyloid. Cells exposed to silibinin at various concentrations showed no evidence of toxicity over a 24-hour period (3, 23).
The study gives important insights into the mechanism of amyloid fibril-induced neuronal cell death and the action of silibinin. It also highlights the potential of silibinin to prevent and treat amyloid-related diseases such as type II diabetes and Alzheimer's. Amyloid fibrils are widely known for their effect on major diseases like Alzheimer's and diabetes, so it is important to understand their structure and formation, which in turn suggests methods to stop their growth. With the isolation technique devised by Cohen and Calkins, scientists were able to further study the structure and formation of amyloid fibrils, and thereby to understand how fibrils affect diabetes. Such studies not only allow other scientists to replicate what has already been done but also enable research at a higher level. By knowing the structure and characterization of amyloid fibrils, Katebi et al. were able to understand the protective effects of silibinin on insulin amyloid fibrils.