Over the years, there has been increasing emphasis on teaching English as an instrument of communication, and technology has played a critical role in facilitating authentic communication. The focus of language teaching objectives and practices has shifted from the printed word and knowledge of language systems to the use and communicative value of the spoken language in everyday settings (Vanderplank, 1993). In this regard, the efficacy of multimedia has drawn great consideration and is presumed, on the theory that adding an additional media channel to deliver a message, to significantly improve communication and comprehension (Dwyer, 1978).
Multimedia technology (such as TV, computers, networks, e-mail, video cassette recorders (VCRs), compact disc read-only memories (CD-ROMs), and interactive multimedia) supports the teaching technique of integrating real-life situations in the target language into the language classroom. In this setting, learners gradually expand their language acquisition through exposure to the authentic environment of the target language.
In one of the most influential theories of second language acquisition, Krashen (1985) proposed that learners can acquire a large amount of language unconsciously through ample comprehensible input. The Input Hypothesis, as stated by Krashen, argues that the use of the target language in real communicative environments and the stress on rich comprehensible input, achieved by exposing learners to the target language in the classroom, facilitate language acquisition. In other words, language acquisition happens only when comprehensible input is suitably delivered. In this respect, language teachers strive to employ a wide range of teaching techniques to create authentic situations and to promote learners’ language acquisition.
Many researchers have presented strong evidence that multimedia (such as computers, video, and TV) has beneficial effects on language learning owing to rich and authentic comprehensible input (Brett, 1995; Egbert & Jessup, 1996; Khalid, 2001). The results of these studies demonstrated that the use of multimedia develops learners’ language performance in reading, listening comprehension, and vocabulary recognition. One survey by the American Association of School Administrators showed that 94 percent of teachers and supervisors believe that technology has enhanced students’ learning considerably. Similarly, many English-as-a-Second-Language (ESL) teachers concur that educational technology presents many possibilities for advancing students’ language proficiency, including their vocabulary, reading, listening, and speaking.
Similarly, television programs and videos have established a place in the communication of information and are powerful tools for improving language teaching (Anderson & Lorch, 1983). Both TV and videos communicate not only visually through pictures but also aurally through the spoken word, music, and sound effects. Subtitles, a key feature of television and videotapes, are synchronized with the dialogue or narration of the program’s audio track, expanding comprehension and understanding of TV programs and videos. Lambert, Boehler, and Sidoti (1981) asserted that the general trend indicates that information coming through two input forms (e.g., dialogue and subtitles) is more systematically processed than when either dialogue or subtitles are presented alone. This result is in agreement with the dual-coding theory of Allan Paivio (1971), supporting the usefulness of multiple-channel communication. In the same way, Hartman’s (1961a) findings support the between-channel redundancy theory, which suggests that when information is redundant between two input sources (e.g., dialogue and subtitles), comprehension will be superior to when the information comes through one input form (e.g., dialogue). He also described redundant information as identical information from the visual and verbal stimuli. In this respect, Hartman concluded that the benefit of the multiple-channel learning system is that information coming from two sources is more comprehensible than information coming through one. Information input through different sensory channels supplies receivers with additional stimulus reinforcement to ensure that more complete learning takes place. More explicitly, the additional stimulus reinforcement helps learners organize and structure the incoming information.
However, a contrasting theory, the single channel theory proposed by Broadbent (1958), states that humans can process information through only one channel at a time. This theory assumes that a decline in learning takes place if information is received through two or more sources: learning is delayed when a multiple-channel presentation of information is used in the teaching-learning process. Alongside this contested issue of single versus multiple-channel presentation, awareness of and interest in the use of multimedia resources, such as subtitled materials, have been increasing.
Today, language learning has become more accessible through multimedia that combines spoken information with full visual context, such as subtitles. For instance, subtitled videos, representing words and pictures in aural and visual form, are more likely to activate both coding systems in processing than words or pictures alone. The dual-coding theory proposed by Paivio (1971) suggests that when pictures are added to the meaning, the number of signals connected with the message increases; viewers are then more likely to retain the message. The results of past research therefore appear to support the view that the use of subtitles invokes multi-sensory processing, interacting with audio, video, and print mechanisms. These input sources improve the process of language learning, enhance comprehension of the content, and increase vocabulary through viewing subtitled words in meaningful and stimulating circumstances. In addition, many teachers consider that subtitles shed new light on a better way of using various multimedia in the ESL classroom. When subtitling technology appeared more than 15 years ago, many educators quickly saw value in exploiting its potential to help students process language differently and effectively by means of the printed word (Goldman, 1996; Holobow, Lambert, & Sayegh, 1984; Koskinen, Wilson, Gambrell, & Neuman, 1993; Parks, 1994; Vanderplank, 1993).
Subtitles, which in this study are English-language subtitles on instructional English-as-a-Second-Language (ESL) videos, are the written version of the audio component, permitting dialogue, music, narration, and sound effects to be shown at the bottom of the screen on most televisions. Two kinds of subtitles are generally distinguished: open subtitles and closed subtitles. Closed subtitles are not automatically visible to the viewer but can be displayed by turning them on with a remote control or an electronic subtitle decoder. By contrast, open subtitles are visible to all viewers without being turned on. Subtitling is a function not only of television but also of videotapes, which are subtitled by specialists working at computer workstations. To make closed subtitles visible, an electronic subtitle decoder is required; it is easily attached to a television set. Although it is not available in some areas of the world, subtitling technology is broadly accessible and draws great attention in the United States. In 1990, the U.S. Congress passed the Television Decoder Circuitry Act, requiring that all new televisions thirteen inches or larger be equipped with subtitle decoder circuitry. The function of the decoder circuitry is to receive, decode, and display closed subtitles from cable, DVD, and videotape signals. Under this regulation, the consumer is no longer required to pay for a separate decoder when in possession of an applicable TV set. Therefore, thousands of people in the U.S. have easy access to subtitles simply by pushing a button on the remote control (National Subtitling Institute, 1989). However, access to subtitles on foreign film videos is still restricted in other countries, such as Taiwan and Japan, where external subtitle decoders are necessary for viewing.
Subtitling was initially devised for the hearing impaired. Statistics on the number of decoders sold confirm that more than half were bought by hearing-impaired users, who assert that decoders are helpful to them. Increasingly, the use of subtitles has also grown among non-native speakers who are motivated to improve their language learning. A study by Hofmeister, Menlove, and Thorkildsen (1992) discovered that 40 percent of decoder buyers are people other than the hearing impaired, such as foreign students. To be explicit, the reason for this phenomenon is that subtitles show words in a motivating atmosphere in which the audio, video, and print media help viewers comprehend unknown words and their meanings in context. Indeed, subtitles greatly improve comprehension of particular TV programs and progressively enhance English language learning.
Given the benefits of the multimedia approach, ESL programs began to incorporate subtitled materials into their curricula to support ESL students’ language learning. The focus on teaching techniques and on means of optimizing students’ comprehension of the second language has been of great concern with this multimedia. Koskinen, Wilson, Gambrell, and Neuman (1993) stated that subtitled video is a new and promising approach for improving students’ vocabulary, reading comprehension, and motivation. Other studies have been conducted to examine whether subtitled TV and video improve or obstruct students’ learning. The results have indicated that subtitled TV and videos are helpful for the hearing impaired, ESL students, and disabled students (Bean & Wilson, 1989; Borras & Lafayette, 1994; Ellsworth, 1992; Garza, 1991; Goldman, 1996; Goldman & Goldman, 1988; Markham, 1989; Nugent, 1983; Parlato, 1985; Price, 1983; Vanderplank, 1991; Webb, Vanderplank, & Parks, 1994; Wilson & Koskinen, 1986).
Despite the large number of studies demonstrating the benefits of subtitles for the hearing impaired, language learners, and disabled students, similar studies on the use of English subtitles in English teaching are still limited in Iran. Thus, there is great scope for additional examination of the potential of subtitled television videos to enhance language teaching for English-as-a-Foreign-Language (EFL) students. The design of this research elaborates mainly on language learning achievements.
This study adds to the aforementioned research by investigating students’ exposure to target language input through the presentation of subtitled videos. The research focuses on the absence or presence of English subtitles on 10 ESL instructional video episodes over a period of five weeks as the primary variable in an experiment to help determine the conditions for improving Iranian college students’ learning of English as a foreign language.
Many people in Iran have problems communicating with foreigners in English. In addition, getting information from the Internet requires a fair amount of English, which makes accessing information a problem for those with limited English language proficiency. Moreover, for those Iranian students who wish to study abroad, language is the main obstacle: having studied in Farsi throughout their educational lives, they find adapting to a non-Persian environment very difficult. Students in Iran start learning basic English in secondary school; however, the curriculum is based on teaching grammar rather than oral skills, so most students’ oral communication skills are limited.
Moloney (1995) states that the emergence of English in the global market has produced the current ardor for learning English in developing countries. The need for English in Iran is distinctive: English is not only a required course for Iranian students but is also tested as part of the major entrance examinations. These issues inform the present proposal to use subtitled videos and English-language films as a learning solution.
The purpose of this study is to investigate the effectiveness of subtitled videos in enhancing university students’ learning of English as a foreign language (EFL) in Iran. In this study, the term language learning represents two types of performance: the first is students’ content comprehension of a particular video episode, as evaluated by a Content-Specific Test (CST), and the second is the learners’ vocabulary acquisition.
Teachers’ professional development activities regularly focus on teaching strategies that help students improve along their learning path. As this research is designed to discover the effectiveness of presenting subtitles on movies for vocabulary acquisition and content comprehension, it would be of much significance if this strategy were confirmed to work. More generally, the findings of this research could be added to the body of knowledge on language teaching, learning, and the use of multimedia technology. The findings can be shared with curriculum designers, with EFL/ESL teachers seeking to implement the technology in the classroom, and with materials developers for English teaching.
This study focuses on English language learners’ performance on the Content-Specific Tests (CST) of vocabulary and content comprehension of videos with and without subtitles. The researcher tested each of the following null hypotheses while controlling for initial differences in the participants’ general English proficiency.
Ho 1: There is no significant difference in scores on the content comprehension subtest of the CST between subjects watching videos with subtitles and those watching videos without subtitles.
Ho 2: There is no significant difference in scores on the content vocabulary subtest of the CST between subjects watching videos with subtitles and those watching videos without subtitles.
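Hypotheses of this kind are typically tested with a between-groups comparison of CST scores (the study itself controls for initial proficiency, which calls for an analysis of covariance). As a minimal sketch only, using made-up comprehension scores rather than any data from this study, a Welch t statistic for two independent groups can be computed as follows:

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    standard_error = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (mean_a - mean_b) / standard_error

# Hypothetical CST content-comprehension scores (illustrative only)
with_subtitles = [78, 85, 80, 90, 74, 88]
without_subtitles = [70, 75, 68, 80, 72, 77]

t = welch_t(with_subtitles, without_subtitles)
```

A t statistic large enough in absolute value relative to the appropriate critical value would lead to rejecting the null hypothesis of no difference between the groups.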
1.6 Research Questions
1. Does the presence of English subtitles on the videos help learners improve their vocabulary significantly?
2. Does the presence of English subtitles on the videos help learners improve their content comprehension significantly?
3. Does the presence of English subtitles on the videos help learners improve their English language proficiency significantly?
The following definitions are given to ensure uniformity and understanding of these terms throughout the study.
Subtitles are the written rendering of spoken words, designed for deaf and hearing-impaired people to help them read what they cannot hear. The terms subtitles and captions are used interchangeably in this research and are defined as transcriptions of the spoken words into written text in the same language, shown at the bottom of the screen.
Closed subtitle: A subtitle of spoken words that can be viewed only by means of a special decoding device installed in the television set or a separate decoder machine.
Open subtitle: A subtitle of spoken words that is always printed at the bottom of the screen.
Content-Specific Test (CST): An instrument designed by the researcher for this study to measure learners’ overall comprehension, in terms of vocabulary and content comprehension, of a particular video segment. The CST includes two subtests: vocabulary and content comprehension.
Content vocabulary: The vocabulary that appears in the particular video segment viewed by the subjects.
Content comprehension: Comprehension focusing mainly on the whole story script, testing viewers’ understanding of the particular information shown in the video.
The researcher encountered difficulty in accessing a sample representative of the entire Iranian population of EFL learners, since the country is very large and English learners are very numerous. It was also very hard to control teachers’ in-class activities based on the methodologies presented to them. The non-generalizability of the findings to all English learners, especially ESL learners, is another limitation, because the research was conducted in an EFL (Iranian) context. The last but not least limitation is the choice of material, since there are various types of videos. The researcher therefore restricted the material to an instructional video series, Connect with English, since it is available both with and without subtitles and is suitable for the participants’ proficiency level.
This study is divided into five chapters. Chapter I introduces the foundation for this research, the purpose of the study, and definitions of key terms used throughout the study to diminish potential misunderstanding.
Chapter II presents a review of the literature on the use of subtitles. It starts with a theoretical review of cognitive information processing relevant to the single channel theory and the multiple-channel theory, with a focus on the cue-summation theory, the between-channel redundancy theory, the dual-coding theory, and the capacity theory. It then continues with a discussion of schema theory, Krashen’s Comprehensible Input Hypothesis, and Anderson’s ACT model. Subsequently, the relevant major research on subtitles for the hearing impaired, disabled learners, learners with normal reading ability, and language learners is presented.
Chapter III outlines the method of hypotheses testing formulated in Chapter I. It also includes the research design, followed by a description of the subjects in this study, the treatment materials employed, the testing instruments, the data collection procedure, and the details of the data analysis applied.
In Chapter IV, the analyses performed to test the research hypotheses are explained in detail, with the quantitative results of these analyses and an interpretation of the results.
The final chapter, Chapter V, summarizes the findings of the study in light of research hypotheses and discusses the performance of the subjects and the results of the analyses shown in Chapter IV. The conclusion interprets the effect of subtitled videos on EFL students’ language learning in relation to their listening and reading comprehension and their vocabulary. To synthesize the conclusion of this study, pedagogical implications, the limitations of the study and further research are presented.
In many communities around the world, competence in two or more languages is an issue of considerable personal, socio-cultural, economic, and political significance (Genesee, "What Do We Know About Bilingual Education for Majority Language Students," McGill University). Historical documents indicate that individuals and whole communities around the world have been compelled to learn other languages for centuries, and they have done so for a variety of reasons, such as language contact, colonization, trade, education through a colonial language (e.g., Latin, Greek), and intermarriage, among others (Lewis, 1977). The term "learning" has been understood in different ways by psychologists throughout history. Some behaviorists hold that "learning is a relatively permanent change in behavior which occurs as a result of experience or practice." Iranian students, moreover, see the radically changing, globalized world as a reason to study English as a second language and as a key to the main language of scholarship. Thus, the Iranian government requires students to study English from early primary school through to university, over a course of about seven years. Despite this, reports show poor linguistic results; there is therefore a need for an in-depth analysis of teaching methods to understand the reasons for this failure.
The process of effective learning is usually analyzed into two components: the first is individual interest in a topic, and the second is situational interest (Hidi, 1990). Individual interest is the degree to which the learner or reader is interested in a certain topic, subject area, or particular activity (Prenzel, 1988; Schiefele, 1990). Situational interest is explained as an emotional state aroused by situational stimuli (Anderson, Shirey, Wilson, & Fielding, 1987; Hidi, 1990). The literature shows that the individual interest of the reader/learner has a positive influence on text comprehension (Anderson, Mason, & Shimey, 1984; Asher, 1980; Baldwin, Peleg-Bruckner, & McClintock, 1985; Belloni & Jongsma, 1978; Bernstein, 1955; Entin & Klare, 1985; Osako & Anders, 1983; Renninger, 1988; Stevens, 1982).
These researchers defined individual interest as the relatively long-term orientation of an individual towards a type of object, activity, or area of knowledge (Schiefele, 1987); this is why engaging tools such as movies seem to have a positive effect on learning. Schiefele also holds that individual interest is a domain-specific or topic-specific motivational characteristic of personality, composed of feeling-related and value-related valences. Individual interest is naturally generated by a text that produces a feeling of enjoyment or involvement, and it motivates the learner to become involved in reading the specific subject matter.
Fransson (1977) indicated that students who were interested in a particular topic exhibited deeper processing of a related text. Using free recall and extensive interviews, Fransson found that high-interest subjects made more connections both between different parts of the text and between what was read and prior knowledge or personal experience. Benware and Deci (1984) and Grolnick and Ryan (1987) arrived at much the same results, demonstrating that topic-interested (or intrinsically motivated) students exhibited markedly greater conceptual comprehension of text content than non-interested and extrinsically motivated students.
The process of language learning is seen as a complex cognitive skill. According to Neisser (1967), cognitive psychology holds that all information passes through a process in which the sensory input is transformed, reduced, focused, stored, recovered, and used.
Gardner and Lambert (1972) are considered pioneers in the investigation of socio-psychological aspects of second-language learning. They conducted numerous studies on the relationship of attitudes and social context to the process of learning a second language. They proposed a distinction between two models: integrative and instrumental motivation. The former is defined as full identification by the learner with the target-language group and a readiness to be identified as part of it. The latter indicates interest in learning the L2 only as a tool to secure a better future through social mobility; in this case the learner does not identify with the target-language speakers. Integrative motivation is often considered more likely than instrumental motivation to lead to success in second language learning.
In particular, some cognitive theorists hold that information-processing theory contains the concept of capacity theory within itself. They suggest that the human capacity for learning a language is not separate and disconnected from other cognitive processes. According to Beck and McKeown (1991), most research on vocabulary learning has focused on written text, probably because vocabulary research has developed under the umbrella of reading research. The view that arousing interest facilitates learning is supported by a number of studies clearly indicating that television programs and movie videos may also be used as a motivational tool in language teaching, especially in the area of vocabulary learning. For instance, Rice and Woodsmall (1988) found that children learn words in their first language when watching animated films with voice-over narration. Such learning can be further improved when the films are subtitled, i.e., when voice is accompanied by orthographic information (Schilperoord, Groot, & Son, 2005). Research shows that in countries like the Netherlands, where almost 20 percent of all programs on Dutch public and commercial television are foreign, learners are provided with opportunities to learn foreign languages, especially since the 1980s, when teletext was introduced. Similarly, Koolstra and Beentjes (1999) maintain that in small language communities a considerable number of television programs are subtitled, creating the possibility of vocabulary acquisition not only in one’s first language but also in one’s foreign languages. Indeed, the use of television programs and movie videos for educational purposes is not new. What researchers are interested in is how much learners can learn from films and television programs, and what factors influence the amount and kind of learning.
To address this concern, researchers have examined features such as message structure and format characteristics to identify those which best facilitate learning (Reese & Davie, 1987). Reese and Davie report studies suggesting that visual illustrations are most effective when they are accompanied by the script.
Turning to the socio-cultural factors affecting attitudes toward and success in learning, combinations of learner traits explain the use that the learner makes of available learning opportunities, all of which affect L2 learning. Wong-Fillmore (1991) indicates three main factors affecting L2 learning: the need to learn the second language; speakers of the target language who provide learners access to the language (cultural openness); and a social setting that brings learners and target-language speakers into contact frequently enough to make language development possible (social openness, cultural openness, and interaction between learners and target-language speakers). Clement (1980) also places great emphasis on L2 learners’ motivation and the cultural milieu. In Clement’s model, the primary motivational process is defined as the net result of two opposing forces: integrativeness minus fear of assimilation. Integrativeness refers to the desire to become an accepted member of the target group; fear of assimilation refers to the fear of becoming completely like the other culture and losing one’s native language and culture. Fear of assimilation, along with fear of loss of one’s native language and heritage, may weaken L2 learning motivation, especially in countries like Iran, where people are deeply proud of their history and heritage. Schumann (1986) proposes a model focusing on a cultural aspect of learning that he terms "acculturation," that is, the integration of the social and psychological characteristics of learners with those of target-language speakers. Under this heading, he classifies the social and affective factor clusters together as a single variable. According to Schumann, acculturation involves two factors (social integration and psychological openness), namely, sufficient contact and receptiveness between members of target-language and L2-learner groups.
There are clearly a number of common features among the above models. They all include the effects of social context, attitudes (integrative or instrumental), and acculturation. A problematic social context usually affects L2 learning negatively, especially when the learners are minorities learning the L2 as the language of the dominant group, a role English seems to play as a semi-dominant world language, especially in contrast with Middle Eastern languages. However, learners’ awareness of the necessity of learning the L2 affects their success positively even when it symbolizes a conflict between the minority and the majority. Such L2 learners apply instrumental motivation, which operates as a meta-cognitive strategy whereby they persuade themselves to engage in L2 learning even though they have no liking for the language and the culture (Abu-Rabia, 1991, 1993; Bandura, 1986; Zimmerman, 1989).
Viewing movies and TV programs as a motivational tool in language learning, justifications of the outperformance of students exposed to subtitled video are grounded in research on either the single channel theory or the multiple-channel theories. The multiple-channel theories comprise the cue-summation theory, the between-channel redundancy theory, the capacity theory, and the dual-coding theory. In addition, the schema theory, Krashen’s Comprehensible Input Hypothesis, and Anderson’s ACT model are also evaluated in the following part, with attention to how information is processed and learning happens.
According to Bartlett (1932), a schema is defined as a "store of perceived sensory information" in memory. He explains that schemata are formed and culturally regulated. "As the number of schemata increases, one is able to recall an ever-larger amount of information in minimum time; adapting new information to an appropriate schema allows one to remember new and important ideas" (Rumelhart, 1981, 1984). Consistency with an existing schema leads to understanding, while inconsistency generally causes problems in the comprehension process. Schemata can impede and slow down reading comprehension and memory: details that are inconsistent with one’s schema are deleted, transformed, or rationalized to fit the existing schemata in memory. On the other hand, schemata can also play a facilitating role when their details are consistent with the reading content; in this case cognitive processing occurs quickly, without serious obstacles (Anderson, 1987; Van Dijk & Kintsch, 1983). Researchers usually compare the reading of culturally familiar and unfamiliar stories by students from different ethnic backgrounds. Results have shown that students’ comprehension of cultural stories is a function of their cultural familiarity with these stories (Abu-Rabia, 1991, 1993, 1995; Abu-Rabia & Feuerverger, 1996; Adams & Collins, 1977; Anderson & Gipe, 1983; Anderson, Reynolds, Schallert, & Goetz, 1977; Baldwin et al., 1985; Carrell & Eisterhold, 1983; Lipson, 1983; Paul, 1959; Reynolds, Taylor, Steffensen, Anderson, & Shirley, 1982; Steffensen, Joag-Dev, & Anderson, 1979; Yousef, 1968; Zegarra & Zinger, 1981).
The single channel theory is based on the principle that the human processing system has limited capacity in the central nervous system (Travers, 1964). Broadbent (1958) first developed a comprehensive theory of attention grounded in the idea that humans are capable of processing information through only one channel at a time. The human processing system cannot process two information channels concurrently. Broadbent explains that there is only one channel connecting the senses to the central nervous system. Therefore, when bimodal presentations of information are implemented in the teaching-learning process, a decrease in learning takes place. Broadbent explains that this phenomenon occurs as a direct result of a filtering process in the central nervous system: if information arrives simultaneously through separate channels, an overload takes place, triggering a filtering process that allows only information from one channel to be received. Broadbent assumes that unattended sensory information is filtered out without being analyzed. Subsequent experiments using selective listening procedures, however, produced evidence contradicting this assumption: it was found that attention is sometimes attracted to an unattended channel in which the subject’s own name is spoken. In elaborating on and developing Broadbent’s model, Travers (1964) studied the human information-processing capacity. He questioned whether any benefits are gained by utilizing simultaneous bimodal presentations. Based on his results, there were no gains when the recall of common words presented simultaneously by vision and audio was compared with words presented by either vision or audio alone. Indeed, Travers declared that multiple-channel communication led to jamming of the system and a decrease in communication (Severin, 1967).
The evidence from Travers (1964) thus supports Broadbent’s findings, albeit with some differences.
Furthermore, many studies seem to support the single-channel theory. Fleming (1970) claimed that many teaching-learning processes that attempt to utilize multiple channels overload the presentation with stimuli. Learners become so confused by the stimuli arriving through diverse channels that a decrease in performance occurs. This overloading through multiple channels of information causes ineffective learning and comprehension. In general, one may conclude that information coming from more than one channel will hinder learning.
In brief, Broadbent’s (1958) single-channel theory opposes the multiple-channel theory and offers arguments against the use of subtitles (Van Mondfrans & Travers, 1964). According to Broadbent, presentation of identical input information (e.g., dialogue and captions) arriving simultaneously through both channels produces no difference in performance (recognition or recall) compared to information presented by either dialogue or captions alone. It is therefore believed that information entering from one channel interferes with that in another. This supports the single-channel theory’s claim that multiple channels should not have a positive effect on learning comprehension.
The single-channel learning theory discussed above does not support the view that adding an input channel to transmit a message increases communication and comprehension. A contrasting theory, the multiple-channel theory, involves at least two of the channels under consideration (Hartman, 1961a), claiming an increase in comprehension when learners interact with any combination of the different available sensory channels (Hsia, 1971).
Yongqiang et al. (2006), in an IEEE conference paper, proposed a discrete-time multi-channel learning controller. This multi-channel method aims to extend the learnable frequency band of a learning control system and thus improve learning performance. In each channel, a simple learning controller is used; more channels with such simple learning controllers produce a wider learnable band. In their paper, two illustrative design approaches are demonstrated through an example of robot joint control. Experimental results verified the effectiveness of the multi-channel approach and the discrete-time control design.
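The specific multi-channel design of Yongqiang et al. is not reproduced here, but the underlying mechanism of a discrete-time learning controller, refining a control input trial by trial from the tracking error, can be sketched briefly. The sketch below is a minimal single-channel, P-type iterative learning controller on a hypothetical first-order plant; the plant model, the gain `gamma`, and the function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def simulate(u, a=0.5, b=1.0, y0=0.0):
    """One trial of a hypothetical first-order plant: y[t+1] = a*y[t] + b*u[t]."""
    y = np.zeros(len(u) + 1)
    y[0] = y0
    for t in range(len(u)):
        y[t + 1] = a * y[t] + b * u[t]
    return y

def ilc_trials(y_d, n_trials=20, gamma=0.5):
    """P-type iterative learning control: u_{k+1}(t) = u_k(t) + gamma * e_k(t+1)."""
    N = len(y_d) - 1
    u = np.zeros(N)            # start with no control knowledge
    errors = []
    for _ in range(n_trials):
        y = simulate(u)
        e = y_d - y            # tracking error for this trial
        errors.append(np.max(np.abs(e)))
        u = u + gamma * e[1:]  # learning update from next-step error
    return u, errors

# Desired trajectory: track a half sine over one trial
t = np.linspace(0, 1, 51)
y_d = np.sin(np.pi * t)
u, errors = ilc_trials(y_d)
print(f"max error, trial 1: {errors[0]:.3f}; trial 20: {errors[-1]:.2e}")
```

Each trial reuses the previous trial’s input plus a correction proportional to the tracking error, so the maximum error shrinks from trial to trial; a multi-channel scheme in the spirit of the paper would apply such updates separately in different frequency bands to widen the learnable band.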
Dwyer (1978) refers to the multiple-channel theory as the simultaneous presentation of stimuli in different sensory channels. Burton, Moore, and Mayers (1995) conducted research examining learners’ ability to access aural and visual stimuli concurrently and how much information can possibly be processed. Their findings led to the conclusion that multiple-channel learning theory positively supports learning. In this sense, educators have confirmed the importance of multiple-channel learning and chosen appropriate media for instructional use.
Much research supports the helpfulness of multiple-channel learning; the study by Yongqiang et al. (2006) discussed above is among the most recent. Hartman’s (1961a) results pointed out that learning is improved when both audio and written means are used simultaneously. However, he proposed that print alone ought to be used with literate learners for complicated materials, while audio is helpful when the material is simple or the learners are illiterate. Furthermore, Levie and Lentz (1982) argued that attributes and information from the two channels even strengthen each other and improve both recall and comprehension. This conceptualization is discussed in the coming section on the basis of the theories that support multiple-channel learning: the cue-summation theory, the between-channel redundancy theory, the dual-code theory, and capacity theory.
Multiple-channel learning argues that learning is enhanced as the number of available cues or stimuli increases. Miller (1957) supports the cue-summation theory by clarifying: "When cues from different modalities (or different cues within the same modality) are used simultaneously, they may either facilitate or interfere with each other. When cues elicit the same responses simultaneously, or different responses in the proper succession, they should summate to yield increased effectiveness. When the cues elicit incompatible responses, they should produce conflict and interference" (p. 78).
Hartman (1961a) indicated that the greater the number of cues, the more chances there are to form connections between one stimulus and another. Hartman pointed out that an increase in comprehension takes place under two conditions. First, when redundant input is conveyed at the same time through print and audio materials, cue-summation supports comprehension. Second, when cognitive association helps link the information units, as when a successful verbal label accompanies an unclear pictorial channel, cue-summation is likely to strengthen comprehension and improve learning. In contrast, Hartman also found that interference is likely to occur when information between channels is unrelated.
Integrating as many cues as possible within channels, Severin (1967) elaborated that subjects are provided with more information from which to learn as the number of available cues or stimuli is increased. Severin also suggested that cues need to be relevant in order to add to existing information, which supports Hartman’s finding that when cross-channel cues are unrelated, interference takes place and results in reduced comprehension and communication.
Studies that support Miller, Hartman, and Severin consistently point out that the relevance of available cues is a significant factor affecting comprehension and learning. The application of cues can direct attention to the areas that are significant to instructional objectives and can help compensate for learners who lack the relevant schema when encountering video information. Furthermore, if pictorial cues are redundant with the audio input, viewers will be more likely to recall the information.
Shimozaki et al. (2001) note that many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of presented features increases. Improvement in human performance with an increasing number of available features is typically attributed to the summation, or combination, of information across the independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. Shimozaki, in his work on a visual search task with spatial frequency and orientation as the component features, selected a particular set of stimuli so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with an increasing number of features, implying that the improvement observed with additional features may be due to stimulus information rather than combination across independent features.
The multiple-channel presentation engaging two or more channels under consideration (e.g., audio and visual) improves comprehension. This can be attributed to the between-channel redundancy (BCR) theory. Berkay (1994) believes that when information is redundant between two input sources (e.g., captions and dialogue), comprehension will be better than when the information is presented through only one input source (e.g., dialogue). Discussing the hypothesis of this theory, Hartman (1961b) broadly defined redundant information as the same unit of information presented simultaneously in two or more sources, such as captions and dialogue. In addition, Hartman found that the input information in each channel ought to be closely related when the effectiveness of multiple-channel learning is at the heart of consideration. He defined redundancy as containing four levels: redundant, related, unrelated, and contradictory. When the messages are unrelated or contradictory, they compete for attention, and interference results. When the messages are related and redundant, they complement each other to enhance learning (Hanson, 1992; Ketcham & Heath, 1962). Hartman concluded that "redundant information simultaneously presented by the audio and print channels is more effective in producing learning than is the same information in either channel alone" (p. 42).
Likewise, in exploring the conditions for improving learning, Hsia (1971) concluded that redundant information promotes the dimensionality of information, which leads to effective learning. Petterson, with reference to the conclusion that between-channel redundancy can facilitate information processing, demonstrated that learning competence is much improved when words and visuals work together and provide redundant information.
Between-channel redundancy, which predicts superior performance from audio-print resources, decreases both errors and information loss and increases recall, since one channel supplies cues for another (Hsia & Jester, 1968). Because the viewer receives two instances of the same message, learning and comprehension are enhanced.
Generally speaking, redundant input information is usually presented between the audio and video channels on television. Much research on learning from newscasts has found that between-channel redundancy helps learning (Mann, 2005; Drew & Grimes, 1987; Findahl, 1971; Reese, 1983). Hanson (1989), elaborating on instructional television programs, demonstrated that "probably the most basic and important of those generalizations is that redundant audio and video usually enhance learning" (p. 15). Thus captioning, presenting an extra input source and providing parallel input information, has been shown to help information processing. Since the additional information source (captioning) does not conflict with the auditory input, learning is facilitated rather than made more difficult.
Differing from the between-channel redundancy model, Reese (1984) claimed that redundant oral and written information may obstruct learning, as viewers have to process two channels at the same time. Similarly, Hartman’s (1961b) results suggested that adding a related source of information to two redundant sources might diminish the effect of between-channel redundancy. Hartman commented that when a related source of information, such as a picture, is added to two redundant information sources (captions and dialogue), performance is weaker than that produced by the two redundant sources alone. Nevertheless, Berkay (1994) argued that adding the related information source (picture) does not always reduce the effect of BCR to weaker performance. Earlier research (Reese, 1984; Ruggiero, 1986) showed that the related information source (pictures) interferes with the processing of the two already redundant information sources (captions and dialogue) at a high presentation rate. In other words, whether the related information source (picture) interferes with learners’ successful comprehension depends on the presentation rate used with the captions or subtitles.
Overall, Berkay (1994) argued that between-channel redundancy theory shares an analogous view of information processing with the dual-coding theory of Paivio (1971), who emphasized that information can be represented comprehensibly by both visual and verbal codes.
The dual-coding theory developed by Paivio (1986) proposes that there are two fundamentally different types of information stored in working memory, which he calls imagens and logogens. Imagens denote the mental representations of visual information in analogue codes, while logogens denote the mental representations of language information in symbolic codes. Research with PET scans and fMRI supports this theory by showing that different parts of the brain are activated by the respective forms of stimuli. This much-referenced theory is important because it suggests that information should be presented with consideration for how it will be processed, stored, and retrieved. If, for instance, information is presented in a text format when it will be converted and stored in a visual representation, it would have been more efficient to present it visually.
Chapman (2006) summarizes the research on what is more efficiently processed and retained when represented with images and with words. He states that images are better for spatial structures, location, and detail, whereas words are better for representing logical conditions and abstract verbal concepts. Allan Paivio (1971) developed dual coding as an endeavor to clarify how learners employ associations when pictorial and linguistic information are processed in different ways. Paivio also declared that information can be represented by pictures and words simultaneously. The two information sources activate two coding systems, visual and verbal codes, that are functionally autonomous yet interconnected by referential links. This theory supposes that there are two cognitive subsystems: one specific to verbal information and the other specific to non-verbal information.
Paivio also referred to the nonverbal subsystem as an imagery system, since its significant functions comprise the analysis of scenes and the generation of mental images. The verbal system deals with language-focused representations, which contain the auditory and visual forms of words and their writing patterns.
Functionally, the two subsystems are independent, meaning that either can operate without the other or both can operate in parallel. Even though independent of one another, these two subsystems are interconnected so that a concept represented as an image in the visual system can also be converted to a verbal label in the other system, or vice versa (p. 222).
The notion of autonomous imagery and verbal representation systems is at the center of Paivio’s dual-coding model. In this model, Paivio assumed two different representational units: imagens for mental images referring to any tangible or concrete stimuli, and logogens for verbal entities, including abstract stimuli comprising unclear visual as well as verbal information. Paivio argued that imagens are coded twice, both as images and as corresponding verbal labels, while logogens are more difficult to picture and are coded only as verbal information. DCT declares that imagens are quicker and easier to recall than logogens, which are structured in discrete, sequential units. Dual-coding theory developed from Paivio’s studies on the function of imagery in associative learning, supporting the learning effectiveness of illustrated texts and suggesting that information is much easier to retain and retrieve once dual coding takes place. The result shows that the availability of two mental representations instead of one improves comprehension. Furthermore, Paivio states that a blend of the spoken and the written word can trigger the verbal and imagery subsystems at the same time, contributing to an increase in performance in comparison with a single presentation.
Mayer and Anderson’s (1991) studies support Paivio’s argument, showing that narration linked to visual information increased recall, problem solving, and transfer. Baggett and Ehrenfeucht (1983) showed that when college students were presented with both visual and verbal/auditory input, there was no competition for resources; they concluded that encoding information in one medium does not hamper the other. In search of additional evidence for DCT, Rieber (1994) verified that the imagery and verbal systems serve different functions, with distinct storage and processing characteristics and memory units that are activated independently. Despite the differences between the two coding systems, the imagery and verbal systems are interconnected by referential links. Analogous outcomes were obtained by Chun and Plass (1996a), who claimed that DCT helped learners acquire language more efficiently. The results showed that when both verbal and visual resources were presented to learners, they could create referential links between the two forms of mental representation, the verbal and visual representational systems. In addition, Chun and Plass (1996b) conducted research on reading comprehension with multimedia that supported dual-coding theory.
Much of this research has elaborated on the use of multiple-channel sources, such as subtitled videos presenting pictures and words in auditory and visual form. Danan’s (1992) research elaborated on the relationship between reversed subtitling (L1 audio and L2 subtitles) and dual-coding theory. The results showed that both reversed subtitling (L1 audio and L2 subtitles) and bimodal input (L2 audio and L2 subtitles) increase language learning.
Paivio (1971) and Morton (1980) supported the notion that the blend of the spoken and the written word triggers both the imagen (imagery) and logogen (verbal) systems, contributing to an increase in performance in comparison with a single presentation. Mayer (2001) also held that dual-channel engagement is better than single-channel processing of verbal information or text alone. DCT proposes that when pictures supplement the message, the number of cues connected to the message increases. Thus, the use of subtitled videos demonstrating pictures and words in auditory and visual form simultaneously increases comprehension for viewers. In addition, when pictorial cues are redundant with the audio information, viewers will be more likely to remember the message.
Not all research conforms to the multiple-channel theory. Some studies have indicated that there is no significant advantage when multiple sources arrive at the same time. Nugent (1992) found increased achievement when a combination of images and audio was employed, but no achievement gains when audio and text were blended. Multiple-channel learning is thus still a contested issue. One probable explanation for the contradictory results of multiple-channel learning research lies in the limited capacity of the human central nervous system. Reed (1999) describes this capacity as the amount of mental effort required to perform a task. From this capacity-based perspective, Cognitive Load Theory is built on the assumption that working memory is limited in its capacity, while long-term memory can store a virtually unlimited amount of knowledge (Plass, 2009).
Hsia and Jester (1968) concluded that as the presentation rate across channels increased, subjects’ performance decreased. Based on his review of the literature on the human processing capacity of modalities, Hsia (1971) showed that subjects appeared to make better use of one modality when under great pressure from the density or speed of information presentation.
From this work, Hsia derived the following two features:
1. Human information processing engages the verbal and nonverbal systems at the same time until the capacity of the system is overloaded; and
2. When information input exceeds the system’s capacity, the system relapses to a single-channel system (its abilities decrease).
Hsia also stated that when the central nervous system processes a multiple-channel mechanism beyond its processing capacity, the jamming described by Travers (1964) may cause it to relapse to a single-channel system; reduced comprehension and loss of information are therefore inevitable. Hsia further proposed that capacity is overloaded not only by the number of cues but also by the complexity of the information.
Capacity theory has been examined in a large number of studies. Kahneman’s (1973) research supported this theory, claiming that there is a limit on one’s capacity to perform mental work. In other words, capacity theory suggests that we have a limited amount of mental effort to allocate across tasks. This theory is based on Kahneman’s hypothesis that parallel activities are very likely to interfere with each other, although the interference can come from different sources. Interference occurs when the requirements of two activities exceed the available capacity (Reed, 1999). In conclusion, a capacity model implies that interference does not stem from specific sources but depends on the demands of the task.
Supporting this theory, Lang (1995) questioned whether students have the attentional capacity to read, view, and listen at the same time. Lang believes that viewers have a limited-capacity processing system. He also suggests that television messages can overload this processing system quite easily, so that information contained in the television message is lost.
From this point of view, for a television message to be completely processed, the input message must be encoded into short-term memory. If the system is overloaded, the whole message cannot be processed. Capacity theory affirms that when the information system is working at memory capacity, or is overloaded, processing of the input information must be shared among the various processes (encoding, rehearsal, and storage) at the same time. Put differently, the greater the processing demands of the message, the less of the total information will be remembered from it.
Cognitive Load Theory also provides a convenient and effective roadmap for evaluating educational materials with regard to the cognitive load they impose and the capacity learners have at the time. Educators can use the principles that research has identified, based on Mayer’s theory as well as other empirical studies (such as those of Plass and others discussed above), to optimize their educational materials. The main goal of these principles is to reduce the processing of task-irrelevant information, which can be generated either by the design of the learning materials and their presentation, or by the design of the instructional activities and their task demands. The goal in applying the principles to teaching and educational material design is to increase the amount of mental effort involved in processing task-relevant materials and constructing related mental models (Plass, 2009).
The LinguaLinks library defines schema as follows:
1. A schema (singular), as understood in schema theory, represents generic knowledge. A general category (schema) will include slots for all the components, or features, included in it.
2. Schemata (plural) are embedded one within another at different levels of abstraction. Relationships among them are conceived to be like webs (rather than hierarchical); thus each one is interconnected with many others.
Cognitive theories maintain that schemata are hierarchically organized conceptual knowledge structures in memory. Defining schemata, Thorndyke (1984) commented that knowledge structures that represent a general procedure, object, social situation, or sequence of events are referred to as schemata. In addition, Schirmer (1994) stated that a new notion can be understood by activating a schema connected to prior knowledge and experiences. Consequently, readers can employ schemata to decode meanings from texts through assimilation of a new idea or accommodation of existing notions to integrate new information. Evidently, the more prior knowledge one has, the better one is able to process incoming data. Bransford (1979) also argued that schemata may activate visual and verbal information processing by presenting an abstract form of representation, enabling one to strengthen comprehension and memory.
Put another way, a schema is the conceptual information structure employed in achieving better comprehension. According to Mayer (1983), a schema helps in selecting and arranging incoming information into an integrated, meaningful structure that interacts with existing schemata. The schema theory asserted by Bransford (1979) and Mayer (1983) is characterized by four general points:
Schemata are structures used for understanding facts in a wide variety of situations.
Schemata are the built-in information in the learner’s mind, based on existing learning and experiences.
Schemata are activated during the process of comprehension.
Schemata contain gaps that are filled in with details of the new information to be comprehended.
In general, schemata influence what we try to understand and anticipate. The conceptual information organized by schema theory can be employed in various comprehension processes, such as visual and aural context and image processing.
Stephen Krashen proposed the Monitor Model, one of the most significant theories concerning second-language acquisition. According to Krashen (1985), second language learning is very much like first language learning (L2 ≈ L1): no conscious effort is required to focus on the language. If people are exposed to the target language through listening or reading, the ability to speak and write emerges more or less at its own tempo. Much attention in the research has been given to the central role of acquisition in cognitive academic skills. The Comprehensible Input Hypothesis (Krashen, 1985) affirms that, for the purposes of language acquisition, vocabulary, meaning, pronunciation, and structures already known to the student will couple with the language that is new to the student.
The Comprehensible Input Hypothesis is Krashen’s effort to explain how a learner acquires a second language; it concerns acquisition, not learning. Accordingly, the L2 learner progresses through the developmental stages in a natural order when receiving second language input that is only one level higher than the learner’s current stage of linguistic competence. In other words, if a learner is at a specific stage, acquisition occurs when they are exposed to comprehensible input that is one level higher.
In other words, second language learners understand messages by obtaining and processing comprehensible input. Their speech and grammar knowledge is acquired automatically from input that is both comprehensible and sufficient. The ability to generate language is believed to emerge naturally and need not be taught directly. Accordingly, language teachers can provide students with meaningful, authentic, and comprehensible language that would not otherwise be accessible to them, so that acquisition occurs automatically.
An essential part of the Input Hypothesis is the idea that language teaching should emphasize real communication and natural interaction. Videotape presentation appears closely linked to this model, aiming at authentic, real environments as the main element of successful teaching. The idea that captioned or subtitled TV programs or videos are effective educational tools for second language learners is thus in accordance with Stephen Krashen’s second-language acquisition hypotheses. The teaching implication is to fill the class period with plenty of comprehensible input in order to activate learners’ acquisition naturally.
“Every film is a foreign film, foreign to some audience somewhere” (Editorial).
Self-evident yet rife with epistemological consequences, this opening remark references not only the foreignness of languages (linguistic and imagistic), for which subtitles offer some elusive form of translation, but also other alterities generated by the medium itself and the cultural differentials between sites of production and reception (Fenner, 2004).
With the advent of television and video technology, captioning has been employed to show spoken utterances as written subtitles. Throughout this thesis, the terms subtitles and captions are used interchangeably and describe the rendering of spoken utterances in written form in the same language, presented at the bottom of the screen. When speaking of captions or subtitles, two captioning technologies need to be introduced: open captions and closed captions. Open captions, similar to subtitles, are visible to all viewers; they are written words displayed automatically at the bottom of the screen, and TV viewers cannot remove them from the screen. In contrast, closed captions, also presented at the bottom of the screen, can be turned on or off with the remote control via a built-in electronic telecaption decoder or an external one. Viewers cannot read the captions unless they switch on the decoder.
In the United States, television broadcast of closed-captioned programming began on March 16, 1980 (Garza, 1991). The National Bureau of Standards (NBS) suggested the idea of open captions in 1971 and supervised the growth of captioning technology. After demonstrating the practicality of open captions, the Public Broadcasting System (PBS) in Boston produced television programs with open captions for the public (Carney & Verlinde, 1987). The normal-hearing audience complained about the presence of open captions on television programs because they were potentially distracting. According to Cronin (1980), in seeking to benefit the hearing-impaired through the presentation of captions on the TV screen, PBS began in 1973 to develop the closed-caption technology suggested by the network, under the commission of the Bureau of Education for the Handicapped of the Department of Health, Education and Welfare (HEW). The formation of the National Captioning Institute (NCI), established by HEW, made a remarkable contribution to closed captioning. In March 1980, NCI debuted the first closed-captioned television service, twenty hours a week of programs on the American Broadcasting Company (ABC), the National Broadcasting Company (NBC), and PBS (Garza, 1991). Viewers could open the captions if they equipped their TVs with decoders manufactured and marketed by the NCI. Over time, NCI continued to expand captioning technology, creating in 1982 a real-time captioning system used for newscasts, sports events, and live broadcasts as they aired (Block & Okrand, 1983). Furthermore, responding to a great increase in the quantity of captioned network programming, NCI, in partnership with the ITT Corporation in 1989, developed the first caption-decoder microchip. This was built into new television sets at the manufacturing stage to make captioning technology widely successful and accessible to the public.
Access to captioning technology in the television set itself came with the passage of the Television Decoder Circuitry Act: in 1990, the U.S. Congress required all new televisions, thirteen inches or larger, to be equipped with the decoder (National Captioning Institute, 1993). Consequently, consumers no longer needed to buy a separate decoder.
Fundamentally, captioning was invented for the deaf and hearing-impaired. Over time, however, surveys revealed that more than half of all decoders were bought by people with no hearing problems, such as English-as-a-second-language (ESL) learners. The reason for this trend is that captions place words in a stimulating situation in which the audio, video and printed contexts help viewers derive new meanings from the presented material. The use of subtitles thus aims to improve viewers' understanding of specific television programs and, increasingly, to support English language learning. Neuman and Koskinen (1992) found that learners construct correlations between words and meanings through television's combination of sounds and pictures.
To examine students' understanding of subtitled television programs or captioned videos, it is essential to address several strands of research on whether the application of captioning helps or hinders students' language learning.
To elaborate on the significance of imagery in deaf education and substantiate the effectiveness of instruction that combines visual and written presentation modes, Nugent (1983) examined the instructional usefulness of visuals and captioning in videotape and film instruction for deaf and hearing-impaired elementary and junior high school students. He designed three instructional conditions: (1) visuals and captions, (2) visuals alone, and (3) captions alone. The results showed that students viewing visuals and captions scored significantly higher than those seeing either captions or visuals alone. This was consistent with the predicted hypothesis affirming the significance of visuals in deaf education. In addition to demonstrating the effectiveness of combined visual and subtitled instruction, Nugent's (1983) results showed that deaf and hearing-impaired elementary school students could use both the visuals and the wording in captioned instruction. They evidently employed a strategy that allowed them to alternate between the two and exploit each to best advantage. Based on this investigation, Nugent concluded that captions are informative to the deaf, which supported the hypothesis that presenting input information through both visual and linguistic codes is superior to presenting it through a single code.
Analogous research was conducted by Wilson and Koskinen (1986) in Chicago. In their study, eight hearing-impaired children improved their reading vocabulary, comprehension, and motivation when subtitled television was used in place of regular reading instruction. The findings showed that the students who received the captioned video outperformed the control group of students who viewed the captions only. Wilson and Koskinen further noted that educational projects had begun to experiment with captioned television as a way to develop skills in reading vocabulary, oral reading fluency and comprehension.
Ward, Wang, Paul and Leoterman (2007) evaluated the effects of near-verbatim captioning versus edited captioning on a comprehension task performed by 15 children, ages 7-11, who were deaf or hard of hearing. The children's animated television series Arthur was chosen as the content for the study. Data collection began with participants watching videotapes of the program. Researchers then signed and/or spoke 12 comprehension questions from a script to each participant. The sessions were videotaped, and a checklist was used to ensure consistency of the question-asking procedure across participants and sessions. Responses were coded as correct or incorrect, and the dependent variable was the number of correct answers. Neither near-verbatim nor edited captioning was found to be better at facilitating comprehension.
Having established the usefulness of captions for deaf and hearing-impaired viewers, researchers turned to the effect of captions on learners who were below-average readers. A study conducted at the University of Maryland by Goldman and Goldman (1988) examined captioned programming for disabled students. It showed that elementary school-aged remedial readers not only gained more words while viewing captioned television but could also recall more words over time. A parallel study by the same authors showed a 24-minute subtitled episode of Amazing Stories to high school students and found that students learned to read words simply by watching captioned television, without receiving any instruction. Koskinen, Wilson, and Gambrell (1986) examined the effects of four treatments on learning-disabled students to determine whether captions were perceived as meaningful. Four groups were studied while viewing (1) captioned TV with sound, (2) captioned TV without sound, (3) conventional TV (audio only), and (4) the text of the captions. Seventy-seven Maryland public school students with learning disabilities participated in the study. The video and text materials consisted of four excerpts from the children's science television program 3-2-1 Contact. Two experienced teachers viewed videotapes of the series to identify meaningful subtitled sections that could be understood without the accompanying audio or video. Evaluation materials, including word recognition, comprehension, and oral reading measures, were developed for the study. The results suggested that both subtitled TV with sound and conventional television were worth exploring as media for improving the reading skills of learning-disabled students. The audio did not appear to hinder learners' processing of the subtitles.
In fact, these results support the multiple-channel theory that processing simultaneous, related information improves learning.
Other research (Koskinen, Wilson, Gambrell, & Jensema, 1991) confirms that closed-captioned television (CCTV) reduces the difficulty of learning new words and is viewed as a medium with which learners feel confident. Koskinen, Wilson, Gambrell, and Neuman (1993) focused on the effects of CCTV on incidental vocabulary acquisition. Participants were randomly assigned to captioned or uncaptioned groups and viewed nine science information segments over a period of nine weeks. Three posttests, consisting of sentence anomaly, word recognition and word meaning measures, were administered to assess the participants' targeted vocabulary. A questionnaire on TV viewing was also administered to evaluate the subjects' perception of the knowledge gained from the science videos and their opinions of the use of CCTV. The study did not find a significant difference between the two groups on the word recognition and sentence anomaly posttests, but the word-meaning test showed significant differences in favor of CCTV. The questionnaire data revealed that the participants responded very positively to the science videos and to the CCTV presentation.
Koskinen et al. (1988) found that the majority of teachers believed subtitled television was a helpful means of improving vocabulary skills and of locating information for ESL students and disabled students. They also reported that students especially liked watching captioned programs and thought that watching captioned television helped them learn new words.
Likewise, to examine the use of redundant subtitled information in improving the learning of hearing students with learning disabilities, Wilson and Koskinen (1986) carried out a project on the possible uses of captioned television. Captioned television was employed as a way to teach reading in three different groups: one of remedial readers, one of students who spoke English as a second language, and one of hearing-impaired students. In all of these experiments, the results were similar. Learner motivation was very high, and teachers found many opportunities to expand students' reading vocabulary in engaging ways. Having viewed a section of a show several times, students in all three groups could read many of the captions fluently. Students' comprehension was also reported to improve.
Since captioned television has been found to be both motivating and beneficial for hearing-impaired and hearing viewers alike, many researchers have advanced the hypothesis that it has a significant effect on second language learning in terms of reading, listening, and content comprehension. Captions offer language learners the chance to read words and comprehend text in a meaningful and motivating context while they are learning a language. Many studies conducted in this area have reached similar conclusions, asserting that captioned television develops listening and reading comprehension, vocabulary acquisition, word recognition, decoding skills and motivation to read (Parks, 1994). Furthermore, many language teaching studies have emphasized the significance and advantages of first language (L1) and second language (L2) subtitle presentation for particular language skills. The findings have demonstrated that the advantages of subtitles extend even to those without any hearing troubles, including those with special educational needs, such as English-as-a-second-language (ESL) and English-as-a-foreign-language (EFL) students. In the following part, captioning studies are outlined to review the literature on the use of second language captions for language learning gains in listening, content, vocabulary and reading.
Garza (1991) studied the language learning benefits of combining spoken and printed text in one medium through captioning, with 35 advanced ESL students and 20 learners of Russian. The findings showed a significant correlation between the presence of captions and improved comprehension of the linguistic content of the video. He noted that captions could help build connections between reading and listening comprehension.
Markham (1989) examined the effects of captioned television videotapes on the listening comprehension of ESL students at all proficiency levels. Seventy-six university-level ESL students took part in the study. The results showed considerable comprehension gains from viewing videos with captions across these groups. Contrary to the researcher's assumption that advanced students would probably be distracted by the captions more than beginning and intermediate students, the advanced students benefited from the captions as much as any other group. Furthermore, the findings did not support the assumption that beginners would understand neither the subtitled nor the unsubtitled episodes because of their limited language ability; on the contrary, the beginners performed better when subtitles were present. In brief, the researcher concluded that the multisensory characteristics of captioned television (e.g., sound, print and video) appeared to enable ESL students to view words in a meaningful and motivating context.
Rees (1993) performed a study at the International Language Institute of Massachusetts to discover whether subtitles increase meaningful comprehension for language learners. The findings showed that Chinese and Japanese ESL students who watched closed-captioned news programs and sitcoms increased their vocabulary and improved their listening comprehension.
In an experimental study with 72 college freshmen in English classes, Shen (1993) employed a computer-based interactive videodisc system to examine the effects of second language captioning on listening comprehension. The researcher reported that captions served as an aid from which students generated answers. In other words, the group exposed to subtitles performed much better on the posttest than the group without such exposure.
Dow and Price (1983) conducted an experimental study highlighting the relationship between subtitled television shows and foreign language learning. They examined whether foreign learners of English might benefit from closed-captioned television and videos in the process of their language learning. A total of 450 language learners from 76 diverse native language backgrounds took part in the study. The findings showed that subtitles significantly improved their general comprehension of the linguistic information contained in the videos. Moreover, the researchers affirmed that subtitled videos help facilitate the learning process by giving English learners insight into native speakers' culture.
Huffman (1986) also suggested that subtitled television could have considerable value in improving ESL students' ability to comprehend and retain the given information. He argued that the multisensory characteristics of captioned television appeared to allow bilingual learners to view the wording in meaningful and motivating contexts.
Furthermore, Markham's (1989) study examined the effects of CCTV on the comprehension of 67 university ESL students. The researcher reported that beginning, intermediate and advanced students all obtained considerable comprehension gains when they watched a TV program with L2 subtitles.
Talaván (2007) states that subtitles can be used together with authentic video to develop word recognition and vocabulary acquisition skills in the EFL class. Neuman (1990) studied 129 seventh and eighth graders in bilingual courses and examined the value of subtitles in foreign language teaching under four different viewing modes. Nine sections of educational science videos were captioned in the learners' native language. The findings illustrated that the subjects viewing subtitled programs gained more new words from the second language than those in the other conditions. The findings thus supported the effect of CCTV on bilingual students' language learning, conceptual knowledge, and literacy.
Neuman and Koskinen (1992) believed that CCTV makes the language environment so rich that students can acquire new words incidentally in context. On the whole, their research supported the argument that CCTV, as a multisensory and entertaining medium, supplies influential comprehensible input that affects ESL students' vocabulary learning and reading. Koskinen et al. (1995) also conducted a study on the effects of captioned science episodes on the incidental vocabulary acquisition of 72 participants over a period of nine weeks. The participants' word recognition was evaluated, along with sentence anomaly detection and word meaning for the targeted vocabulary. The results did not show any significant differences on the word recognition and sentence anomaly posttests; the word-meaning test, however, produced significant findings in favor of CCTV.
Supporting the effectiveness of L2 captions for vocabulary learning, Bean and Wilson (1989) stated that adult non-native-speaker students demonstrated particularly positive attitudes toward captioning and vocabulary growth. Learners who watched L2-captioned videos showed significant development in listening comprehension, reading comprehension, vocabulary acquisition and word recognition.
Parlato (1985) used captioned TV in class for group viewing activities that provided a common frame of reference, or talking point, from which to build vocabulary and concepts. Learners watched the programs, searched for differences between the captions and the dialogue, and discussed these differences after viewing. This helped them develop their reading fluency and metalinguistic awareness of how language can be used and manipulated.
Based on the above-mentioned outcomes, Goldman (1993) argued that CCTV motivated intermediate and advanced ESL learners in reading comprehension. Furthermore, CCTV is a powerful and effective supplementary teaching aid (Goldman, 1996).
Markham and Peter (2003) investigated the effects of using Spanish captions, English captions, or no captions with a Spanish-language soundtrack on the listening/reading comprehension of intermediate university-level Spanish as a Foreign Language students. A total of 213 intermediate (fourth-semester) students participated as intact groups in the study. The passage material consisted of a seven-minute DVD episode presenting information about preparation for the Apollo 13 NASA space exploration mission. The students viewed only one of three treatment conditions: Spanish captions, English captions, or no captions. The Spanish-language dependent measure consisted of a 20-item multiple-choice listening comprehension test. The results revealed that the English captions group performed at a considerably higher level than the Spanish captions group, which in turn performed at a substantially higher level than the no-captions group on the listening test.
This literature review began with definitions of learning and with factors in learning success, such as social and cultural factors, and then moved through different theories of learning to consider materials on techniques designed for the highest rate of success in learning. To gain a better understanding of how effectively subtitles can develop learners' reading, listening comprehension, and vocabulary recognition, it is necessary to recognize that language learning is a process of information cognition; we therefore examined human cognitive information processing. In reality, learning can take place successfully through either single or multiple channels. Accordingly, the first part of this literature review discussed an inclusive theoretical framework supporting the application of captions to second language learning: cognitive information processing theory, dual coding theory, single-channel and multiple-channel theory, cue-summation theory, schema theory, and the comprehensible input hypothesis.
The second section presented an outline of closed-captioning technology and a brief review of past research on captioned viewing. Although captions were designed for the hearing-impaired, researchers have also focused on their advantages for hearing viewers with special needs. The studies reviewed above testify that the presence of L1 and L2 captioning has positive influences on hearing-impaired, disabled and ESL learners in terms of reading and listening comprehension. In studies of caption viewing, the results suggested that subtitled TV programs or videos have a strong impact on language teaching and on the improvement of learners' learning ability.
The next chapter will test the above findings with Iranian L2 learners and consider a model of effective teaching that identifies optimal caption characteristics for use in Iranian L2 classes.