From Quantum Computing to Computers of Generation Omega (an overview of the Fall 2020 class CS5354/CS4365)
Professor Vladik Kreinovich, UTEP CS
When: Friday, March 6th, 2020, 11:00 AM - 12:00 PM
Where: CCSB 1.0204
While modern computers are much faster than in the past, there are still many practical problems for which they are too slow. Since we have been unable to achieve a drastic speedup with the physical processes traditionally used in computing, a natural idea is to analyze whether other physical processes can help. This analysis is the main topic of this class.
A natural idea is to look for processes whose future behavior is computationally complex to predict. So, we will start by recalling the main definitions of computational complexity, such as worst-case time complexity, average time complexity, feasible algorithms, P and NP, and NP-hard problems.
Then we will analyze different physical phenomena from the viewpoint of their computational complexity. We will start with probably the most realistic option -- quantum computing -- and then move to the use of randomness in general, to the use of physicists' belief that every physical theory will eventually need to be modified, and to the use of physical processes with non-classical space-time models, such as special and general relativity and the possibility of discrete space-time.
Past Seminar Archive
Multi-Modal User Interaction: Gesture + Speech using Augmented Reality Headsets
Francisco R. Ortega, Colorado State University
When: Friday, February 28th, 2020, 1:30 PM - 2:30 PM
Where: CCSB 1.0202
Multi-modal interaction, in particular gesture and speech, is essential for augmented reality headsets as this technology becomes the future of interactive computing. It is possible that in the near future, augmented reality glasses will become pervasive and the preferred device. This talk will concentrate on the motivation behind gesture and speech user interaction, a recent study, and future work. The first part of the talk will describe a study in which we demonstrated early and essential findings about gesture and speech user interaction. Findings include the types of gestures performed, the timing between gesturing and speech when used multi-modally (130 milliseconds), workload (measured using the NASA TLX), and a series of design guidelines resulting from this study. I will also describe the future direction of this research and collaborative multi-modal gesture interaction.
Dr. Francisco R. Ortega is an Assistant Professor at Colorado State University and Director of the Natural User Interaction Lab (NUILAB). Dr. Ortega earned his Ph.D. in Computer Science (CS) in the field of Human-Computer Interaction (HCI) and 3D User Interfaces (3DUI) from Florida International University (FIU). He also held positions as a Post-Doc and Visiting Assistant Professor at FIU from February 2015 to July 2018. Broadly speaking, his research has focused on gesture interaction, which includes gesture recognition and elicitation. His main research area focuses on improving user interaction by (a) eliciting (hand and full-body) gesture sets by user elicitation, and (b) developing interactive gesture-recognition algorithms. His secondary research aims to discover how to increase interest in CS among non-CS entry-level college students via virtual and augmented reality games. His research has resulted in multiple peer-reviewed publications in venues such as ACM ISS, ACM SUI, and IEEE 3DUI, among others. He is the first author of the book Interaction Design for 3D User Interfaces: The World of Modern Input Devices for Research, Applications, and Game Development (CRC Press). Dr. Ortega serves as coordinator of the Vertically Integrated Projects program, which promotes applied research by undergraduate students across disciplines.
Differentially Private Computation for Cyber Physical Systems
Sai Mounika Errapotu, UTEP ECE
When: Friday, February 21st, 2020, 11:00 AM - 12:00 PM
Where: BUSN 318
Cyber-Physical Systems (CPS) have spread into many areas, including aerospace, automobiles, chemical processing, civil infrastructure, energy, healthcare, transportation, entertainment, and consumer appliances, due to their tight integration of computation and networking capabilities to monitor and control the underlying systems. Many CPS domains, such as smart metering, sensor/data aggregation, crowd sensing, and traffic control, typically collect huge amounts of individual information for data analysis and decision making, so privacy is a serious concern in CPS. Most traditional approaches protect the privacy of individuals' data by employing trusted third parties or entities for data collection and computation. An important challenge in these large-scale distributed applications is how to protect the privacy of the participants during computation and decision making, especially when such third-party entities are untrusted. This talk focuses on differential-privacy-based secure computation that guarantees individual privacy in the presence of untrusted computing entities. Since confidential information must not be inappropriately released, and the use of untrusted information must not corrupt trusted computation or its utility, this talk discusses privacy-accuracy tradeoffs of differentially private computation in some state-of-the-art applications, taking application-specific information security requirements into account.
Software Reliability Engineering: Algorithms and Tools
Vidhyashree Nagaraju, University of Massachusetts Dartmouth (Faculty candidate)
When: Friday, January 31st, 2020, 9:00 AM - 10:00 AM
Where: CCSB 1.0202
While there are many software reliability models, there are relatively few tools to automatically apply these models. Moreover, these tools are over two decades old and are difficult or impossible to configure on modern operating systems, even with a virtual machine. To overcome this technology gap, we are developing an open source software reliability tool for the software and system engineering community. A key challenge posed by such a project is the stability of the underlying model fitting algorithms, which must ensure that the parameter estimates of a model are indeed those that best characterize the data. If such model fitting is not achieved, users who lack knowledge of the underlying mathematics may inadvertently use inaccurate predictions. This is potentially dangerous if the model underestimates important measures such as the number of faults remaining or overestimates the mean time to failure (MTTF). To improve the robustness of the model fitting process, we have developed expectation conditional maximization (ECM) algorithms to compute the maximum likelihood estimates of nonhomogeneous Poisson process (NHPP) software reliability models. This talk will present an implicit ECM algorithm, which eliminates computationally intensive integration from the update rules of the ECM algorithm, thereby achieving a speedup of between 200 and 400 times that of explicit ECM algorithms. The enhanced performance and stability of these algorithms will ultimately benefit the software and system engineering communities that use the open source software reliability tool.
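To make the kind of model fitting involved concrete, here is a minimal sketch, not the speaker's ECM algorithm, of maximum likelihood estimation for one classic NHPP model, the Goel-Okumoto model with mean value function m(t) = a(1 - e^(-bt)). The conditional MLE of a has a closed form, so the fit reduces to a one-dimensional search over b:

```python
import math

def go_profile_loglik(b, times, T):
    """Profile log-likelihood of the Goel-Okumoto NHPP at rate parameter b.
    The conditional MLE of a has the closed form a = n / (1 - exp(-b*T)),
    so only b must be searched numerically."""
    n = len(times)
    a = n / (1.0 - math.exp(-b * T))
    # intensity lambda(t) = a*b*exp(-b*t); mean value m(T) = a*(1 - exp(-b*T))
    return n * math.log(a * b) - b * sum(times) - a * (1.0 - math.exp(-b * T))

def fit_goel_okumoto(times, T, b_lo=1e-4, b_hi=5.0, iters=60):
    """Maximize the profile log-likelihood over b by golden-section search
    (the profile is unimodal for typical failure-time data)."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    lo, hi = b_lo, b_hi
    for _ in range(iters):
        m1 = hi - phi * (hi - lo)
        m2 = lo + phi * (hi - lo)
        if go_profile_loglik(m1, times, T) < go_profile_loglik(m2, times, T):
            lo = m1
        else:
            hi = m2
    b_hat = 0.5 * (lo + hi)
    a_hat = len(times) / (1.0 - math.exp(-b_hat * T))
    return a_hat, b_hat
```

Here a is the expected total number of faults and b is the per-fault detection rate, so a_hat * exp(-b_hat * T) estimates the faults remaining after testing to time T; the stability concerns in the abstract arise because naive optimizers on such likelihoods can converge to poor estimates.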
Vidhyashree Nagaraju is a PhD candidate in the Department of Electrical and Computer Engineering at the University of Massachusetts Dartmouth (UMassD), where she received her MS (2015) in Computer Engineering. She received her BE (2011) in Electronics and Communication Engineering from Visvesvaraya Technological University in India.
Differential Debugging for Side Channel Vulnerabilities
Saeid Tizpaz Niari, University of Colorado (Faculty Candidate)
When: Monday, January 27th, 2020, 4:00 PM - 5:00 PM
Where: Prospect Hall, room 324
In early 2018, the Meltdown and Spectre attacks challenged the security of computing devices globally. These attacks exploit timing information to compromise users' confidential information. While most existing debugging techniques provide support for functional correctness, support for non-functional properties, such as information leaks via timing observations, is scarce. In this talk, Tizpaz-Niari will showcase a range of tools and techniques to detect, explain, and mitigate side-channel vulnerabilities in large-scale libraries and web applications. The technique combines tools from gray-box fuzzing, dynamic program analysis, and machine learning inference. The talk also presents a novel technique that adapts neural network models to quantify the amount of information leaked.
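To make the notion of a timing side channel concrete, here is a toy sketch (not the speaker's tooling) in which the count of executed comparisons stands in for wall-clock time: an early-exit string comparison leaks how many leading characters of a guess are correct, while a constant-time variant does not:

```python
def unsafe_equals(secret, guess):
    """Early-exit comparison. The number of character comparisons executed,
    a proxy for running time, reveals the length of the matching prefix."""
    ops = 0
    if len(secret) != len(guess):
        return False, 1
    for a, b in zip(secret, guess):
        ops += 1
        if a != b:
            return False, ops  # leaks: bails out at the first mismatch
    return True, ops

def constant_time_equals(secret, guess):
    """Compares every character regardless of mismatches, so the operation
    count is independent of which characters of the guess are correct."""
    if len(secret) != len(guess):
        return False, 1
    diff, ops = 0, 0
    for a, b in zip(secret, guess):
        ops += 1
        diff |= ord(a) ^ ord(b)  # accumulate mismatches without branching
    return diff == 0, ops
```

An attacker who can measure the first function's running time can recover a secret one character at a time; the second function is the standard mitigation.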
Saeid Tizpaz-Niari is currently a PhD candidate in the ECEE department at the University of Colorado Boulder. His research interests are at the intersection of Software Security, Machine Learning, and Verification. He is the first author of multiple publications in top-tier AI, Security, and Verification conferences. In 2018, he received the Gold Research Award from the ECEE department at CU Boulder. In addition, he won second prize for his submission to the First Microsoft Open Source Challenge.
Learning Interpretable Features by Tensor Decomposition
Shah Muhammad Hamdi, Georgia State University (faculty candidate)
When: Wednesday, January 22nd, 2020, 4:00 PM - 5:00 PM
Where: Classroom Building, room C205
Representation learning of the nodes in a graph has facilitated many downstream machine learning applications such as classification, clustering, and visualization. Existing algorithms generate a less interpretable feature space for the nodes, where the roles of the features are not understandable. This talk covers the use of multi-dimensional arrays, or tensors, in node embedding. I will explain how tensor decomposition-based node embedding algorithms consider local and global structural similarities of the nodes, learn the proximity itself, require fewer tunable hyperparameters, and generate a feature space where the feature roles are understandable, while working on different types of static networks. In addition to social networks, I will show another application in the neuroscience domain, specifically on brain networks derived from the resting-state fMRI data of healthy and disabled subjects, where nodes represent brain regions and edges represent functional correlations among them. I will discuss the use of tensor decomposition in the representation learning of biomarkers of neurological diseases, which are the discriminative nodes and edges of the brain networks that can distinguish the healthy population from the disabled population. I will demonstrate some experimental findings on social networks and brain networks, and the potential of this approach for one research problem in solar physics: multi-variate time-series-based solar flare prediction.
Shah Muhammad Hamdi is a PhD candidate in the Department of Computer Science at Georgia State University. His research interests are machine learning, data mining, and deep learning; more specifically, finding interesting patterns in real-life graph and time-series data. His research finds applications in the fields of social networks, neuroscience, and solar physics. He has publications in top data mining conferences such as IEEE ICDM, ACM CIKM, and IEEE Big Data. He has worked as a data scientist intern at Amazon Web Services (AWS) and LexisNexis Risk Solutions. Before starting his PhD, he worked as a Lecturer in Computer Science at Northern University Bangladesh, Dhaka, Bangladesh. He received his Bachelor's degree in Computer Science in 2014 from Rajshahi University of Engineering and Technology (RUET), Rajshahi, Bangladesh.
Research Advances and Opportunities in Scalable High Performance Computing Systems and Applications
Dr. Shirley Moore, ORNL
When: Friday, January 17th, 2020, 11:00 AM - 12:30 PM
Where: CCSB 1.0702
High performance computing (HPC) systems are being transformed from relatively self-contained homogeneous multiprocessor systems to large-scale distributed and networked heterogeneous systems. To address the approaching end of Moore’s Law and Dennard scaling, HPC systems are increasingly incorporating specialized accelerators and new memory and communication technologies. The use of HPC systems is expanding beyond traditional scientific simulation applications to end-to-end coupled workflows that integrate machine learning and artificial intelligence. This talk will discuss recent work in evaluating future technologies, including processor-in-memory (PIM), field programmable gate arrays (FPGAs), and quantum computers. We will also discuss recent efforts and future opportunities in integrating edge computing and machine learning into HPC systems and applications.
Shirley Moore is a Senior Computer Scientist in the Future Technologies Group in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL). Her research interests are in performance evaluation and modeling of emerging hardware and software technologies. She leads the ORNL efforts in several areas of the Department of Energy Exascale Computing Project, including Hardware Evaluation, Proxy Applications, and Application Assessment. She is a Co-PI or senior personnel on a number of research projects, including three quantum computing projects. She has also mentored several high school, undergraduate and graduate student interns while at ORNL.
Workflow Engines: Benefits and Challenges
Logan Chadderdon, Google and UTEP
When: 11 AM - 12 PM Friday, November 15th, 2019
Where: Business room 318
Workflow engines can help drive efficiency and correctness when dealing with complex interactions between systems and/or people. They are complex and multi-faceted, and often under-utilized, but they can serve as a critical piece of a system's architecture. This talk will cover what workflow engines are at a high level, what benefits they can bring in certain scenarios, and what challenges arise when designing, implementing, or using a workflow engine at scale.
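As a rough illustration of the high-level idea (the class name and API here are invented for this sketch, not Google's internal engine), a workflow engine's core can be reduced to a state machine with allowed transitions and an audit trail:

```python
class Workflow:
    """A tiny workflow engine: named states, event-driven transitions,
    and a history log. Real engines add persistence, retries, timers,
    parallel branches, and human task queues."""

    def __init__(self, start, transitions):
        # transitions maps (current_state, event) -> next_state
        self.state = start
        self.transitions = transitions
        self.history = [start]

    def fire(self, event):
        """Advance the workflow, rejecting events that are not legal
        in the current state (a correctness guarantee engines provide)."""
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.state = self.transitions[key]
        self.history.append(self.state)
        return self.state
```

For example, an approval workflow might allow draft -> submitted -> approved/rejected; the engine enforces that an order cannot be approved before submission, and the history gives an audit log for free.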
Logan Chadderdon (firstname.lastname@example.org) is a software engineer at Google. He graduated from the University of Arizona, and then went to work on an internal workflow engine (frontend, backend, and related tooling) at Google for five years and counting in Mountain View, CA. He has a passion for building great products and learning/teaching technology. During the Fall 2019 semester he is teaching a CS1 course at UTEP as a Googler in Residence.
Leveraging HPC for Research and Education
Jerry Perez, Ph.D., UT Dallas
When: 2 PM - 3 PM Friday, November 8th, 2019
Where: CCSB 1.0202
High Performance Computing (HPC) enhances research and funding opportunities at universities that leverage it. HPC can create opportunities for learning and facilitate education, improving the state of the art in STEM classrooms across the university campus and beyond. In this talk, we will explore examples of HPC that may provide increased scholarship and funding at the university and answer the following questions: How can HPC be integrated with my class? What are some examples? How can HPC increase research funding? How can research funding increase HPC? Where can I find HPC computing resources beyond my campus?
Jerry Perez holds a Ph.D. in Information Systems from Nova Southeastern University and an M.B.A. from Wayland Baptist University. He is Director of Cyber-Infrastructure Operations and High Performance Computing in the Office of Information Technology at UT Dallas. He was an Adjunct Professor of Practice at Texas Tech University and Wayland Baptist University, a Senior Research Associate in High Performance Computing at Texas Tech University, and a Computer Architecture Consultant. His research interests include information systems design and deployment, supercomputing systems design and deployment, quantum information systems, information technology management, big data analytics for security, and Internet of Things (IoT) security.
Contact: Amy Wagler (email@example.com) or Natalia Villanueva Rosales (firstname.lastname@example.org).
Evaluating Usability of Permissioned Blockchain for Internet-of-Battlefield Things
Abel Gomez, UTEP CS, PhD. Program
When: Friday, October 18th, 2019, 11:00 AM - 12:00 PM
Where: CCSB 1.0202
Military technology is ever-evolving to increase the safety and security of soldiers in the field while integrating Internet-of-Things solutions to improve operational efficiency in mission-oriented tasks on the battlefield. Centralized communication technology is the traditional network model used for battlefields; it is vulnerable to denial-of-service attacks and therefore suffers performance hazards. It also creates a central point of failure, which is why a flexible model that is mobile, resilient, and effective for different scenarios must be proposed. Blockchain is a customizable platform that allows multiple nodes to update a distributed ledger. The decentralized nature of the system suggests that it can be an effective tool for securing data communication among Internet-of-Battlefield Things (IoBT). In this work, we integrate a permissioned blockchain, namely Hyperledger Sawtooth, into the IoBT context and evaluate its performance with the goal of determining whether it has the potential to serve the performance needs of IoBT environments. Using different testing parameters, the resulting metric data help suggest the best parameter set, network configuration, and blockchain usability views in the IoBT context. We find that a blockchain-integrated IoBT platform depends heavily on the characteristics of the underlying network, such as topology, link bandwidth, and jitter, which can be tuned to achieve optimal performance.
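To illustrate the distributed-ledger idea mentioned in the abstract (this is a generic toy hash-linked chain, not Hyperledger Sawtooth's API), each block commits to its predecessor's hash, so tampering with earlier data invalidates everything after it:

```python
import hashlib
import json

def block_hash(body):
    """Deterministic SHA-256 hash of a block body (canonical JSON)."""
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    """Append a block whose 'prev' field commits to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev": prev, "data": data}
    block["hash"] = block_hash({k: block[k] for k in ("index", "prev", "data")})
    chain.append(block)

def verify_chain(chain):
    """Recompute every hash and check each back-link; any tampering fails."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("index", "prev", "data")}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

In a permissioned blockchain such as Sawtooth, only vetted nodes may append blocks, and a consensus protocol decides which appends win; the hash-linking above is what makes the resulting ledger tamper-evident.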
Myths and Misconceptions about Using Social Media Data for Health Research
Dr. Graciela Gonzalez, University of Pennsylvania
When: Friday, October 11, 2019, 2:00 PM - 3:00 PM
Where: CCSB 1.0202
The total number of users of social media continues to grow worldwide, resulting in the generation of vast amounts of raw data direct from consumers. Popular social networking sites such as Facebook, Twitter, and Instagram dominate this sphere. According to estimates, 500 million tweets and 4.3 billion Facebook messages are posted every day. A Pew Research Report on Social Media estimates that nearly half of adults worldwide and two-thirds of all American adults (65%) use social networking. The report states that of the total users, 26% have discussed health information, and, of those, 30% changed behavior based on this information and 42% discussed current medical conditions. Advances in automated data processing, machine learning, and Natural Language Processing present the possibility of utilizing this massive data source for biomedical and public health applications, if researchers adequately address the methodological challenges unique to this medium. Despite numerous published studies, however, myths and misconceptions persist about the suitability and adequate use of these data, impacting the perception of researchers, institutional review boards, and the general public on the validity of the studies for health research. In this talk, we will discuss (and hopefully debunk!) some of the more poignant myths and misconceptions, based on close to 10 years and 25 publications on the subject.*
Dr. Gonzalez Hernandez is a recognized expert and leader in natural language processing (NLP) applied to bioinformatics, medical/clinical informatics, and public-health informatics. She is an Associate Professor of Informatics in Biostatistics and Epidemiology at the University of Pennsylvania where she established the Health Language Processing Lab within the Institute of Biomedical Informatics. Her recent work focuses on NLP applications for public-health monitoring and surveillance and is funded by R01 grants from the National Library of Medicine and the National Institute of Allergy and Infectious Diseases. Her work on social media mining for pharmacovigilance has resulted in 25 publications in prestigious conferences and journals. Her work on enriching geospatial information for phylogeography uses NLP for the automatic extraction of relevant geospatial data from the literature and for linkage to GenBank records.
Contact: Natalia Villanueva Rosales (Computer Science), email@example.com
*This work was funded by the National Institutes of Health (NIH) National Library of Medicine (NLM) grant number R01LM011176. The content is solely the responsibility of the authors and does not necessarily represent the views of the NIH or NLM.
Prosody Research and Applications: The State of the Art
Nigel Ward, Ph.D.
When: Friday, September 6th, 2019, 11:00 AM - 12:00 PM
Where: CCSB 1.0202
Prosody comprises the musical aspects of speech: beyond the words said, the properties of pitch, loudness, timing, and so on. Prosody is essential in human interaction and relevant to every area of speech science and technology. This talk will be a survey of current developments for non-specialists. It will illustrate the issues and advances using recent findings about the prosodic constructions of English, describe ways to exploit prosody for applications including speech recognition, speech synthesis, dialog systems, and the inference of speaker states and traits, and finally discuss remaining challenges.
This talk will also be presented next month as a Survey Presentation at Interspeech 2019 in Graz, Austria.
Persistent Threats, Active Defense: Cybersecurity Practices Today
Dr. Anthony Caldwell, Pramerica Ireland
When: Friday, May 10, 2019, 10:00 AM
Where: CCSB 1.0202
Ethical hacking, or more broadly information security, is a dynamic field in which the pace of change appears to leave cybersecurity professionals playing catch-up. Over the last ten years, persistent vulnerabilities and security-related issues have plagued many industries and have forced cultural changes oriented around cybersecurity within organizations across the world. Against this energetic working environment, this talk will illuminate some of the issues encountered in the field and how they have been dealt with from the practitioner's perspective. It will give an overview of issues associated with the detection and remediation of vulnerabilities such as cross-site scripting (XSS), business email compromise, and clickjacking. Also discussed are key security principles implemented in the industrial context and the broader area of threat perception, how end-user and technical training might be carried out, and how our test methodologies must adapt in order to provide the best service possible.
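As a minimal illustration of one vulnerability class mentioned above (the rendering functions here are hypothetical, not the speaker's examples), reflected XSS arises when user input is interpolated into markup unescaped, and the standard remediation is HTML escaping:

```python
import html

def render_comment_unsafe(comment):
    # Vulnerable: user input lands in the page as live markup,
    # so a <script> payload would execute in visitors' browsers.
    return f"<p>{comment}</p>"

def render_comment_safe(comment):
    # Remediation: escape HTML metacharacters so the input is
    # displayed as text rather than interpreted as markup.
    return f"<p>{html.escape(comment)}</p>"
```

Dynamic application security testing tools probe endpoints with payloads like the one below and flag responses that reflect them back unescaped.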
Dr. Anthony Caldwell is a cybersecurity engineer for the DevSecOps service at Pramerica Ireland, where he specializes in dynamic application security testing. He is a member of OWASP, a Certified Ethical Hacker (CEH), a Systems Security Certified Practitioner (SSCP), and a founding member of the cybersecurity services offered by Pramerica since their inception in 2010. Given the prevalence and frequency of attacks perpetrated by threat agents across the globe, Anthony helped to transform the way Prudential understands and deals with information security issues, creating many of the techniques and processes used within the organization today. Alongside carrying out security testing, he has given numerous in-house and external talks to a wide variety of audiences ranging from academic to non-technical, and has published twelve professional articles in areas such as information security, ethical hacking, and digital forensics. Dr. Caldwell joined Pramerica in 2001 as a QA engineer, working in mainframe technology and progressing to client-server across numerous business units in Prudential. Before that, Dr. Caldwell began his career working for Intel as a device engineer, where he tested the Pentium processor, and then at AOL Time Warner as a beta tester for their broadband service.
Dr. Caldwell holds an MSc in atomic physics, is a member of the Institute of Physics, and has carried out PhD research in the fields of information systems research and science education. In information systems research, Dr. Caldwell focused on applying the Technology Acceptance Model to establish end users' intentions toward the usage of an online learning platform. His work in science education applied and extended Third-Space Theory in the context of small independent companies that demonstrate scientific principles to schools, museums, and science festivals in the UK, Northern Ireland, and the Republic of Ireland. He is also a part-time lecturer and tutor with Dublin City University, Queen's University Belfast, and Letterkenny Institute of Technology, and is a volunteer with the Donegal Youth Service as a mathematics tutor for the underprivileged.
UTEP host: Somdev Chatterjee; Talk host: Dr. Badreddin
Vidi Opus: A Startup to Revolutionize Agriculture with Innovative Technologies
Jonas Moya, Founder, Vidi Opus
When: Friday, April 26, 2019, 10:00 AM
Where: Business 312
Mr. Moya will speak about some of the key challenges in tracing food and related products as they move through supply chains, starting with the farmer or rancher and finishing at the consumer. He will also speak about the complexity of the food supply system and provide an overview of key challenges and their manifestation in food recalls. He will finish by talking about the Vidi Opus and CattleCast startups, which aim to address some of the aforementioned challenges.
Jonas Moya is an entrepreneur from Tucumcari, New Mexico, and the founder of Vidi Opus. Jonas has held many roles in the industry, including cattle rancher, dryland farmer, livestock deputy inspector, and agricultural researcher. Through these different roles, Jonas has been able to identify many trends and challenges that are consistent across the multiple industries that make up agriculture. That is what led him to found Vidi Opus, a tech company looking to create new disruptive technologies designed to meet the needs of agricultural industries.
The Future of Robotic Exploration of the Universe: A Systems Engineering Perspective
Dr. Maged Elaasar, JPL (NASA, Caltech)
When: Friday, March 29, 2019, 10:00 - 11:00 AM
Where: CCSB G.0208
The Jet Propulsion Laboratory's (JPL) mission is to explore the universe with the aid of advanced robotic systems. These missions necessitate advanced systems engineering methods to achieve their goals safely, reliably, efficiently, and systematically. At JPL, we operate at the cutting edge of systems engineering at all levels of the mission.
In this talk, Dr. Elaasar will articulate some desirable characteristics of a modern systems engineering practice. He will present architectural principles that, when adhered to, can enhance the prospects of improving those characteristics. He will then present recent work that aims at realizing those architectural principles through a software system called Open CAESAR, which is being used by various space projects at JPL. Open CAESAR employs techniques from
Dr. Maged Elaasar is a Senior Software Systems Architect at NASA's Jet Propulsion Laboratory (JPL) at the California Institute of Technology (Caltech). He leads a JPL-wide strategic R&D program named Integrated
Time to Gather Stones
Dr. Vladik Kreinovich
When: Friday, March 15th, 2019, 10:00 - 11:00 AM
Where: Business Building, Room 302
TIME TO GATHER STONES. Many heuristic methods have been developed in intelligent computing. Researchers have proposed many new exciting ideas. Some of them work well, some don't work so well. And promising techniques -- those that work well -- often benefit from trial-and-error tuning. It is great to know and use all these techniques, but it is also time to analyze why some techniques work well and some don't. Following the Biblical analogy, we have gone through the time when we cast away stones in all directions, when we developed numerous seemingly unrelated ideas. It is now time to gather stones, time to try to find the common patterns behind the successful ideas. Hopefully, in the future, this analysis will help to replace time-consuming trial-and-error optimization with more efficient techniques.
CASE STUDIES. In this class, we will mainly concentrate on three classes of empirically successful semi-heuristic methods that do not yet have a full theoretical explanation:
* fuzzy techniques, techniques for translating expert knowledge described in terms of imprecise (“fuzzy”) natural-language words like “small” into precise numerical strategies;
* neural networks (in particular, deep neural networks), techniques for learning a dependence from examples; and
* quantum computing, techniques that use quantum effects to make computations faster and more reliable.
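As a small illustration of the first case study (an invented example, not from the class), fuzzy techniques assign a word like "small" a membership degree between 0 and 1 rather than a crisp yes/no, and combine such degrees with fuzzy logic operations:

```python
def small(x, fully_until=1.0, not_at_all_beyond=5.0):
    """Membership degree in [0, 1] of the fuzzy set "small" for a magnitude x:
    1 up to fully_until, linearly decreasing, 0 beyond not_at_all_beyond.
    The thresholds here are arbitrary illustrative choices."""
    x = abs(x)
    if x <= fully_until:
        return 1.0
    if x >= not_at_all_beyond:
        return 0.0
    return (not_at_all_beyond - x) / (not_at_all_beyond - fully_until)

# A common choice combines degrees with min for AND and max for OR:
def fuzzy_and(u, v):
    return min(u, v)
```

This is the first step of the translation the class studies: once expert words are membership functions, expert rules become numerical formulas that can be tuned and analyzed.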
Toward fluent collaboration in human-robot teams
Dr. Tariq Iqbal, MIT
When: Monday, February 11, 2019, 4:30 - 5:30 PM
Where: Classroom Building, Room 305
Robots currently have the capacity to help people in several fields, including health care, assisted living, and manufacturing, where the robots must share physical space and actively interact with people in teams. The performance of these teams depends upon how fluently all team members can jointly perform their tasks. In order to successfully act within a group, a robot requires the ability to monitor other members' actions, model interaction dynamics, anticipate future actions, and adapt its own plans accordingly. To achieve this, I develop human-team-inspired algorithms for robots to fluently coordinate and collaborate with people in complex, real-world environments by modeling how people interact among themselves in teams and by utilizing that knowledge to inform robots' actions.
In this talk, I will present algorithms to measure the degree of coordination in groups and approaches to extend these understandings by robots to enable fluent collaboration with people. I will first describe a non-linear method to measure group coordination, which takes multiple types of discrete, task-level events into consideration. Building on this method, I will present two anticipation algorithms to predict the timings of future actions in teams. Finally, I will describe a fast online activity segmentation algorithm which enables fluent human-robot collaboration.
Tariq Iqbal is a postdoctoral associate in the Interactive Robotics Group at MIT. He received his Ph.D. from the University of California San Diego, where he was a member of the Contextual Robotics Institute and the Healthcare Robotics Lab. His research focuses on developing algorithms for robots to solve problems in complex, real-world environments, which enable robots to perceive, anticipate, adapt, and fluently collaborate with people in teams.
Toward building an automated bioinformatician: Parameter advising for improved scientific discovery
Dr. Dan DeBlasio, Carnegie Mellon University
When: Thursday, February 14, 2019, 4:30 - 5:30 PM
Where: Classroom Building, Room C305
Modern scientific software has a large number of tunable parameters that need to be adjusted to ensure computational performance and accuracy of the results. When these parameter choices are made incorrectly, we may overlook significant results or falsely report insignificant ones. Optimizing the parameter choices for one input may not provide an assignment that is good for another, so this parameter optimization process typically needs to be repeated for each new piece of data. Standard machine learning methods for solving this problem need to run the software repeatedly, which may not be suitable in practice. Because of the time required to optimize parameters and the possible loss of accuracy that can result when they are chosen incorrectly, the default parameter vector provided by the tool developer is often used. These defaults are designed to work well on average, but most interesting cases are rarely “average”.
In this talk, I will describe my first steps in automatically learning the correct program configuration for biological applications using a framework we call “Parameter Advising”. To apply this framework to the problem of multiple sequence alignment, we developed an accuracy estimator, called Facet, to help choose alignments, since no ground truth is available in practice. When we use Facet for advising on the Opal aligner, we boost accuracy by 14.6% on the hardest-to-align benchmarks. For the reference-based transcript assembly problem, applying parameter advising to the Scallop assembler yields an increase in accuracy of 28.9%. The framework is general and can be extended to other problems in computational biology and beyond. I will discuss possible areas where parameter advising could be used to automatically learn to run complex analysis software.
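A minimal sketch of the parameter advising loop as just described (the tool runner and estimator below are invented stand-ins; Facet itself is an alignment-specific accuracy estimator):

```python
def advise(input_data, parameter_vectors, run_tool, estimate_accuracy):
    """Parameter advising in its simplest form: run the tool under each
    candidate parameter vector, score each output with an accuracy
    estimator (the stand-in for an estimator like Facet, since no
    ground truth exists at run time), and keep the best-scoring result."""
    best = None
    for params in parameter_vectors:
        result = run_tool(input_data, params)
        score = estimate_accuracy(input_data, result)
        if best is None or score > best[0]:
            best = (score, params, result)
    return best  # (estimated accuracy, chosen parameters, chosen output)
```

The interesting research questions sit inside the two callbacks: building an estimator that correlates with true accuracy, and choosing a small advisor set of parameter vectors so the loop stays affordable.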
Dan DeBlasio is currently a Lane Fellow in the Computational Biology Department in the School of Computer Science at Carnegie Mellon University, where he works in Carl Kingsford’s group. He received his PhD in Computer Science from the University of Arizona in 2016 under John Kececioglu, and holds an MS and BS in Computer Science from the University of Central Florida, where he worked with Shaojie Zhang. He recently published a book on his work titled “Parameter Advising for Multiple Sequence Alignment”. Dan also recently finished a two-year appointment on the Board of Directors of the International Society for Computational Biology and is an advisor to the ISCB Student Council, where he has held several roles.
Relativistic Effects Can Be Used to Achieve a Universal Square-Root (Or Even Faster) Computation Speedup
Dr. Vladik Kreinovich
Friday, February 1, 10:00am - 11:00am
In this talk, we show that the special-relativistic effect of time dilation can be used to reduce the computation time of any algorithm from T to the square root of T. For this purpose, we keep the computer where it is, while the whole civilization starts moving around it at an ever-increasing speed, approaching the speed of light. A similar square-root speedup can be achieved if we place ourselves near a growing black hole. Combining the two schemes leads to an even faster speedup: from time T to the fourth root of T.
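The square-root claim can be sketched with one integral of special relativity. This is a reconstruction under stated assumptions (units with c = 1 and a particular acceleration schedule), not necessarily the talk's exact derivation:

```latex
% Travelers moving at speed v(t) age by the proper time
\tau \;=\; \int_0^T \sqrt{1 - v(t)^2}\,\mathrm{d}t .
% If the acceleration schedule is chosen so that, for t \ge 1/4,
\sqrt{1 - v(t)^2} \;=\; \frac{1}{2\sqrt{t}} ,
% then \tau \approx \sqrt{T}: the stationary computer works for time T,
% while the moving civilization experiences only about \sqrt{T}.
```

Gravitational time dilation near a black hole plays the same role in the second scheme, which is why composing the two yields T^(1/4).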
Artificial Intelligence Approaches for Wickedly Hard National Security Problems
Dr. Daniel Tauritz
Monday, February 4, 4:30-5:30
Many national security problems are wickedly hard in that they map to computational problem classes which are intractable. This seminar aims to illuminate how artificial intelligence approaches can be created to address these problems and produce useful solutions. In particular, two promising approaches will be discussed, namely (I) computational game theory employing coevolutionary algorithms for identifying high-consequence adversarial strategies and corresponding defense strategies, and (II) hyper-heuristics employing evolutionary computation for the automated design of algorithms tailored for high-performance on targeted problem classes.
The first approach will be illustrated with the Coevolving Attacker and Defender Strategies for Large Infrastructure Networks (CEADS-LIN) project funded by Los Alamos National Laboratory (LANL) via the LANL/S&T Cyber Security Sciences Institute (CSSI) [https://web.mst.edu/~tauritzd/CSSI/]. This project focuses on coevolving attacker & defender strategies for enterprise computer networks. A proof of concept for operationalizing cyber security R&D from this project demonstrated in simulation that coevolution is capable of implementing a computational game theory solution for adversarial models of network security. Currently, a high-fidelity emulation framework with intelligent attacker and defender agents is being developed, with the end goal of providing a fully automated solution for identifying high-impact attacks and corresponding defenses.
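The coevolutionary idea can be illustrated with a toy loop. This is a generic sketch, not the CEADS-LIN system: the one-dimensional "strategies" and the zero-sum payoff are invented for the example. Two populations evolve against each other, each generation scoring its members by their payoff versus the current opposing population.

```python
import random

random.seed(0)

def payoff(attack, defense):
    # Hypothetical zero-sum game: the attacker wants to be far from the
    # defender's coverage; the defender wants to stay close.
    return abs(attack - defense)

def evolve(pop, score_fn, maximize, sigma=0.1):
    """Keep the best half under score_fn and refill the population with
    mutated (Gaussian-perturbed) copies of the survivors."""
    ranked = sorted(pop, key=score_fn, reverse=maximize)
    survivors = ranked[: len(pop) // 2]
    children = [s + random.gauss(0, sigma) for s in survivors]
    return survivors + children

attackers = [random.random() for _ in range(10)]
defenders = [random.random() for _ in range(10)]
for _ in range(50):
    # Attackers maximize their worst-case payoff against current defenders;
    # defenders minimize the best attacker's payoff against them.
    attackers = evolve(attackers, lambda a: min(payoff(a, d) for d in defenders), True)
    defenders = evolve(defenders, lambda d: max(payoff(a, d) for a in attackers), False)
```

The arms-race dynamic is the point: each population's fitness landscape is defined by the other population, so strategies and counter-strategies improve together.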
The second approach will be illustrated with the Scalable Automated Tailoring of SAT Solvers project funded by Sandia National Laboratories with supplemental funding from the Computer Research Association’s Committee on the Status of Women in Computing Research (CRA-W), and with the Network Algorithm Generating Application (NAGA) project funded via CSSI. These projects show how hyper-heuristics can be employed to create algorithms tailored to arbitrary but specific problem classes for repeated problem solving, where high a priori computation costs can be amortized over many instances of the class.
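The amortization argument behind hyper-heuristics can be shown with a minimal sketch (a generic illustration, not the SAT or NAGA systems): a sequence of low-level heuristics is evolved once, offline, on training instances, and the resulting algorithm is then reused on new instances of the same class. The toy "problem" (drive a number to zero) and the three heuristics are invented for the example.

```python
import random

random.seed(1)

# Low-level heuristics: simple moves on a value we want to drive to zero.
HEURISTICS = {
    "halve": lambda x: x / 2,
    "dec": lambda x: x - 1,
    "noop": lambda x: x,
}

def run(program, instance, steps=8):
    """Apply the evolved heuristic sequence to one problem instance."""
    x = instance
    for name in program[:steps]:
        x = HEURISTICS[name](x)
    return abs(x)

def evolve_program(training, length=8, generations=200):
    """Random-mutation hill climbing over heuristic sequences: the
    expensive search happens once, on the training instances."""
    best = [random.choice(list(HEURISTICS)) for _ in range(length)]
    best_cost = sum(run(best, t) for t in training)
    for _ in range(generations):
        cand = best[:]
        cand[random.randrange(length)] = random.choice(list(HEURISTICS))
        cost = sum(run(cand, t) for t in training)
        if cost <= best_cost:
            best, best_cost = cand, cost
    return best

program = evolve_program([100.0, 64.0, 37.0])
# The evolved program is then applied cheaply to unseen instances:
print(run(program, 80.0))
```

The design cost (the `generations` of search) is paid once per problem class; every subsequent instance only pays the cost of `run`, which is the amortization the abstract refers to.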
Daniel Tauritz is an Associate Professor & Associate Chair in the Department of Computer Science at the Missouri University of Science and Technology (S&T), a University Contract Scientist for Sandia National Laboratories, a University Collaboration Scientist at Los Alamos National Laboratory (LANL), the founding director of S&T's Natural Computation Laboratory, and founding academic director of the LANL/S&T Cyber Security Sciences Institute. He received his Ph.D. in 2002 from Leiden University for Adaptive Information Filtering employing a novel type of evolutionary algorithm. His research interests focus on artificial intelligence approaches to complex real-world problem solving with an emphasis on national security problems in areas such as cyber security, cyber physical systems, critical infrastructure protection, and program understanding. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.