Faculty of Graduate Studies

This community holds theses and dissertations submitted to the Faculty of Graduate Studies.
https://dspace.library.uvic.ca//handle/1828/11

Cognitive control operations involved in switching tasks, and deficits associated with aging and Parkinson's disease
https://dspace.library.uvic.ca//handle/1828/8808 (2017-11-20)
Woodward, Todd Stephen
The purpose of this investigation was to identify the cognitive control operations involved in task switching, and to apply this understanding to a theoretical account of the qualitatively different task-switching deficits associated with aging versus Parkinson's disease (PD). Participants in young (N = 33), elderly (N = 34), and PD (N = 34) samples switched between color naming and word reading in response to incongruent, neutral, or congruent Stroop stimuli, and vocal response time (RT) was recorded. The results suggested that executive processes involved in switching selective attention between object attributes determined a substantial portion of task-switching RT costs. More specifically, these component control processes were identified as: (a) shifting selective attention from the stimulus dimension just attended to on the previous response to the now-relevant stimulus dimension (SHIFT), and (b) a preventative operation characterized by the partial inhibition of selective attention to the now-relevant stimulus dimension, carried out when the probability is high that the now-relevant dimension must be ignored on a future response (MODERATE). A multilayer, linear, parallel distributed processing (PDP) model was presented to demonstrate how these cognitive processes may be implemented by the cognitive system, and how these findings relate to the executive function concepts of the Supervisory Attentional System (SAS) and Contention Scheduling (CS). In addition, a cost associated with responding to the first member of a stimulus pair or triplet was identified (FIRST); however, this operation appeared to function independently from the executive control operations involved in switching tasks (i.e., FIRST was also present for task repetition trials). Finally, a number of two-way interactions between these three main effects (SHIFT, MODERATE, and FIRST) accounted for unique variance in task-switching RTs, such that RT was increased when these effects co-occurred.
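The multilayer, linear PDP account described above can be illustrated with a toy two-pathway network in which a selective-attention gain scales how strongly each stimulus dimension drives the response. The layer sizes, weights, and gain values below are illustrative assumptions, not the parameters of the thesis's model.

```python
# Toy sketch of a multilayer, *linear* PDP network for Stroop task switching.
# All weights and gain values are illustrative assumptions.

def respond(stimulus, task_gain):
    """Linear two-pathway network: the response is the sum of the
    color-naming and word-reading pathways, each scaled by the
    selective-attention gain currently allocated to that dimension."""
    color_input, word_input = stimulus
    color_gain, word_gain = task_gain
    # Each pathway is an identity mapping here, for simplicity.
    out_color = [color_gain * v for v in color_input]
    out_word = [word_gain * v for v in word_input]
    return [c + w for c, w in zip(out_color, out_word)]

# Incongruent Stroop stimulus: the word "RED" printed in blue ink.
# Response units: [red, blue].
ink = [0.0, 1.0]    # blue ink
word = [1.0, 0.0]   # word "RED"

# Full attention to color naming (after a completed SHIFT):
print(respond((ink, word), (1.0, 0.1)))   # blue unit dominates strongly
# MODERATE: attention to color only partially engaged, because the word
# dimension will soon be relevant again:
print(respond((ink, word), (0.6, 0.4)))   # weaker blue advantage -> slower RT
```

In this sketch, the smaller margin between the correct (blue) and competing (red) units under the MODERATE gains is the network-level analogue of the RT cost.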
In the neuropsychological investigation it was demonstrated that the SHIFT and MODERATE effects were significantly greater for an elderly sample compared to a young sample, resulting in an increase in task-switching RT. This deficit was attributed to inefficient shifts of selective attention. Conversely, PD did not necessarily affect the SHIFT and MODERATE operations when compared to age-matched controls; however, the disease was associated with difficulty overcoming Stroop interference while switching tasks. This deficit was interpreted as affecting the SHIFT operation under the most taxing conditions, attributable to a central resource deficit in PD. In contrast, no between-group differences in the FIRST effect were observed.

The efficacy of improving fundamental learning and its subsequent effects on recall, application and retention
https://dspace.library.uvic.ca//handle/1828/8807 (2017-11-20)
Wong, William
In post-secondary introductory courses there is a knowledge base that must be learned before proceeding to advanced study. One method of learning such fundamental material has been the mastery paradigm (Bloom, 1956). Using this approach, students learn a particular knowledge unit until they achieve a predetermined accuracy criterion, for example, 90% correct, on a post-learning test. Lindsley (1972) broadened the definition of mastery learning to include response rate (i.e., responses per minute) and called it 'fluency'. Response rate has not generally been considered in traditional demonstrations of mastery within the academic setting.
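The two criteria described above can be sketched as simple decision rules. The 90% accuracy criterion comes from the text; the 40-responses-per-minute fluency threshold is an illustrative assumption, not a value from the study.

```python
# Sketch of the mastery (accuracy-only) vs. fluency (accuracy + rate)
# criteria. The rate threshold of 40/min is an illustrative assumption.

def mastered_accuracy(correct, attempted, criterion=0.90):
    """Traditional mastery: proportion correct on a post-learning test."""
    return attempted > 0 and correct / attempted >= criterion

def mastered_fluency(correct, attempted, minutes,
                     criterion=0.90, rate_criterion=40.0):
    """Lindsley-style fluency: accuracy AND response rate (correct
    responses per minute) must both meet criterion."""
    rate = correct / minutes
    return mastered_accuracy(correct, attempted, criterion) and rate >= rate_criterion

# A learner who is accurate but slow passes mastery, not fluency:
print(mastered_accuracy(18, 20))        # True  (90% correct)
print(mastered_fluency(18, 20, 1.0))    # False (only 18 correct per minute)
```

The point of the fluency criterion is exactly this gap: two learners with identical posttest accuracy can differ sharply in how readily the material is produced.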
Empirical research to date has examined the effects of each approach separately, with almost no direct comparisons: only one published report has compared the two approaches (Kelly, 1996). In the present study, two single-subject experiments were conducted using a computer program called Think Fast to deliver factual information covering introductory behavioral psychology concepts.
In Experiment 1, a within-subject design (n = 9) was used to control the number of learning trials, the instructional set, and the experimental presentation sequence. This design consisted of multiple learning units and instructions. Group, subgroup, and individual descriptive analyses revealed that posttest achievement was higher for items learned to both Accuracy and Speed than for items learned to Accuracy alone. In analyzing the change in retention from immediate recall to scores obtained after a 30-day absence, learning was more resistant to extinction for concepts that had previously been learned to Accuracy and Speed than for those learned to Reading or Accuracy alone.
Furthermore, retention decreases were examined statistically; there was one significant result in Session 1 and two in Session 2. In Session 1, under the Accuracy condition, subjects recalled 25.5% fewer items after a 30-day absence, t(8) = 5.33, p < .01. A decrease of 12.2% for posttest items learned under the Accuracy and Speed condition was not significant, t(8) = 2.05, p > .05. In Session 2, significantly fewer (Recall 2) posttest items were remembered after a 30-day absence for both experimental conditions, t(8) = 5.08, p < .01 (Accuracy) and t(8) = 3.82, p < .01 (Accuracy and Speed). All other group retention comparisons were not statistically significant.
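The retention comparisons above are paired-samples t-tests with df = n − 1 = 8 for the nine subjects. A minimal sketch of that computation follows; the score vectors are made-up illustrative data, not the study's results.

```python
# Paired-samples t-test of the kind reported above (df = n - 1).
# The recall scores below are illustrative, not the study's data.
import math

def paired_t(before, after):
    """Return (t, df) for the paired t statistic on before-after differences."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                              # standard error of the mean diff
    return mean / se, n - 1

recall_immediate = [20, 18, 22, 19, 21, 17, 23, 20, 18]  # immediate posttest
recall_30_days = [15, 14, 18, 13, 17, 12, 19, 16, 13]    # after a 30-day absence
t, df = paired_t(recall_immediate, recall_30_days)
print(f"t({df}) = {t:.2f}")
```

Comparing the resulting t against the critical value for df = 8 (about 3.36 at p = .01, two-tailed) reproduces the decision rule behind the "significant at p < .01" statements in the abstract.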
In Experiment 2, a between-subject design (n = 6) was used to replicate the effects of Experiment 1, but this time each subject received only one set of instructions. This simplified design yielded no significant differences between learning to both Accuracy and Speed and learning to Accuracy alone. Other factors that affected learning included subjects' baseline ability and the extent of their interest in the study. These factors determined whether or not subjects followed the learning instructions and, to varying degrees, affected their subsequent posttest performance. The study concluded with educational implications and suggestions for further research.

Systematic parameterized complexity analysis in computational phonology
https://dspace.library.uvic.ca//handle/1828/8806 (2017-11-20)
Wareham, Harold
Many computational problems are NP-hard and hence probably do not have fast, i.e., polynomial-time, algorithms. Such problems may yet have non-polynomial-time algorithms, and the non-polynomial time complexities of these algorithms will be functions of particular aspects of the problem; i.e., the algorithm's running time is upper-bounded by f(k)·|x|ᶜ, where f is an arbitrary function, |x| is the size of the input x to the algorithm, k is an aspect of the problem, and c is a constant independent of |x| and k. Given such algorithms, it may still be possible to obtain optimal solutions for large instances of NP-hard problems for which the appropriate aspects are of small size or value. Questions about the existence of such algorithms are most naturally addressed within the theory of parameterized computational complexity developed by Downey and Fellows.
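The f(k)·|x|ᶜ form can be made concrete with the textbook bounded-search-tree algorithm for k-VERTEX COVER, which runs in O(2ᵏ·|E|) time: exponential only in the parameter k, polynomial in the input size. (Vertex Cover is a standard illustration of fixed-parameter tractability, not one of the phonology problems analyzed in this thesis.)

```python
# Bounded search tree for k-VERTEX COVER: a classic f(k)·|x|^c algorithm
# with running time O(2^k · |E|). Standard textbook illustration only.

def has_vertex_cover(edges, k):
    """True iff the graph given as a list of edges (u, v) has a vertex
    cover of size at most k. Branch on an uncovered edge: at least one
    of its endpoints must be in any cover, so the search tree has depth
    at most k and at most 2^k leaves."""
    if not edges:
        return True          # every edge is covered
    if k == 0:
        return False         # edges remain but no budget left
    u, v = edges[0]
    # Branch 1: put u in the cover; discard all edges incident to u.
    rest_u = [e for e in edges if u not in e]
    # Branch 2: put v in the cover.
    rest_v = [e for e in edges if v not in e]
    return has_vertex_cover(rest_u, k - 1) or has_vertex_cover(rest_v, k - 1)

# A 4-cycle 0-1-2-3 has a cover of size 2 ({0, 2}) but not of size 1:
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(has_vertex_cover(cycle, 2))   # True
print(has_vertex_cover(cycle, 1))   # False
```

When k is small, such an algorithm solves large instances exactly even though the unparameterized problem is NP-hard, which is precisely the payoff the paragraph above describes.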
This thesis considers the merits of a systematic parameterized complexity analysis in which results are derived relative to all subsets of a specified set of aspects of a given NP-hard problem. This set of results defines an “intractability map” that shows relative to which sets of aspects algorithms whose non-polynomial time complexities are purely functions of those aspects do and do not exist for that problem. Such maps are useful not only for delimiting the set of possible algorithms for an NP-hard problem but also for highlighting those aspects that are responsible for this NP-hardness.
These points will be illustrated by systematic parameterized complexity analyses of problems associated with five theories of phonological processing in natural languages—namely, Simplified Segmental Grammars, finite-state transducer based rule systems, the KIMMO system, Declarative Phonology, and Optimality Theory. The aspects studied in these analyses broadly characterize the representations and mechanisms used by these theories. These analyses suggest that the computational complexity of phonological processing depends not on such details as whether a theory uses rules or constraints or has one, two, or many levels of representation but rather on the structure of the representation-relations encoded in individual mechanisms and the internal structure of the representations.
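One of the mechanisms named above, the finite-state transducer, can be sketched in miniature with a single rewrite rule. The rule (word-final obstruent devoicing) and the segment inventory below are illustrative assumptions, not drawn from the grammars analyzed in the thesis.

```python
# Minimal sketch of a rewrite rule realized in finite-state-transducer
# style: word-final obstruent devoicing (underlying /bund/ -> surface
# [bunt]). Rule and segment inventory are illustrative assumptions.

DEVOICE = {"b": "p", "d": "t", "g": "k", "z": "s", "v": "f"}

def devoice_final(underlying):
    """Simulate a transducer with a one-segment output delay: each
    segment is emitted only after the next one is read, so the machine
    knows when the buffered segment is word-final and can rewrite it."""
    output, buffered = [], None
    for segment in underlying:
        if buffered is not None:
            output.append(buffered)   # buffered segment is non-final: copy it
        buffered = segment
    if buffered is not None:
        # End of input: the buffered segment is word-final; devoice it.
        output.append(DEVOICE.get(buffered, buffered))
    return "".join(output)

print(devoice_final("bund"))   # bunt
print(devoice_final("ball"))   # ball (rule applies vacuously)
```

Even this toy illustrates the thesis's point: the complexity of the mapping lives in the structure of the representation-relation the transducer encodes (here, a one-segment dependency on the word boundary), not in whether the theory is stated with rules or constraints.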
2017-11-20T00:00:00ZA quantitative concurrent engineering design method using virtual prototyping-based global optimization and its application in transportation fuel cellsWang, Gaofeng Garyhttps://dspace.library.uvic.ca//handle/1828/88042017-11-17T20:15:56Z2017-11-17T00:00:00ZA quantitative concurrent engineering design method using virtual prototyping-based global optimization and its application in transportation fuel cells
Wang, Gaofeng Gary
Concurrent engineering and virtual prototyping are two emerging techniques that are bringing considerable economic benefits to the manufacturing industry. This work proposes using virtual prototyping to produce quantitative measures of product life-cycle performance to facilitate the implementation of concurrent engineering. A multiobjective, virtual prototyping-based global optimization problem is formulated to close the open loop of present virtual prototyping methods and to allow concurrent engineering design to be carried out systematically and automatically.
Virtual prototyping-based design optimization faces several technical challenges. First, virtual prototyping is usually computationally intensive, and the relations between design variables and product life-cycle performances are often implicit. Second, the optimization problem usually consists of multi-modal design (objective and constraint) functions. The complexity and multi-modal nature of the optimization problem preclude the direct use of conventional local and global optimization methods. In this work, a new and efficient search method for virtual prototyping-based global design optimization is introduced. The method, called the Adaptive Response Surface Method (ARSM), carries out systematic "design experiments" through virtual prototyping to build second-order regression models that approximate the design functions. Through an iterative process, the regression models are improved and the global design optimum is obtained. The ARSM search scheme requires only a modest number of design function evaluations, making virtual prototyping-based global design optimization feasible.
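The core response-surface idea behind ARSM can be sketched in one variable: sample the expensive "virtual prototype" at a few designs, fit a second-order model, and take the model's minimizer as the next design estimate. The objective function and sample points below are illustrative; the real ARSM uses designed experiments and an iterative region-reduction scheme not shown here.

```python
# One-variable sketch of the second-order response-surface step in ARSM.
# The objective and sample points are illustrative assumptions.

def expensive_simulation(x):
    """Stand-in for one costly virtual-prototyping run."""
    return (x - 1.3) ** 2 + 0.5

def quadratic_model(x0, x1, x2, f):
    """Fit the second-order model y = a*x^2 + b*x + c through three
    'design experiments' (with exactly three points the least-squares
    fit is an exact interpolation, via Newton's divided differences)."""
    y0, y1, y2 = f(x0), f(x1), f(x2)
    d1 = (y1 - y0) / (x1 - x0)
    d2 = ((y2 - y1) / (x2 - x1) - d1) / (x2 - x0)
    a = d2
    b = d1 - d2 * (x0 + x1)
    c = y0 - d1 * x0 + d2 * x0 * x1
    return a, b, c

a, b, c = quadratic_model(0.0, 1.0, 2.0, expensive_simulation)
x_star = -b / (2 * a)      # minimizer of the fitted response surface
print(round(x_star, 6))    # 1.3 -- the true optimum, from only 3 runs
```

Because the toy objective is itself quadratic, one fit recovers the optimum exactly; for the multi-modal functions described above, ARSM instead refits and shrinks the design space over several iterations, which is where its efficiency advantage over direct global search comes from.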
The proposed quantitative concurrent design method is then applied to the component, stack, and system design of a transportation fuel cell. The approach led to an optimized multi-functional component, a reduction in system cost, and an improvement in system performance. It can be applied to the concurrent design and design optimization of other complex mechanical components, assemblies, and systems.