NSERC Engage Grant Success for the Faculty of Computer Science

Each year, the Natural Sciences and Engineering Research Council of Canada (NSERC) awards 200 Engage Grants across Canada. One of NSERC's most popular funding programs, the Engage Grant offers $25,000 over a maximum period of six months, with the overall aim of fostering new research partnerships.

NSERC Engage Grants are designed to give innovative companies operating from a Canadian base access to the unique knowledge, expertise, and capabilities available at Canadian universities and colleges.

Industry partners gain access to the unique knowledge and expertise of researchers to solve problems relevant to their organization. Researchers apply their knowledge to real-world problems while staying abreast of current technologies used in the private sector.

A success for Dalhousie’s Faculty of Computer Science

Dr. Evangelos Milios, Associate Dean of Research, facilitated the submission of ten applications from the Faculty of Computer Science. All ten applications were awarded, giving the Faculty $250,000 of funding over the next six months. The bulk of this funding supports graduate students and postdocs, with a small portion sometimes covering specialized equipment or travel to meet with the industry partner.

The Faculty of Computer Science's NSERC Engage Grant success accounts for 5% of the national total awarded.

A look at the upcoming projects

NSERC Engage Grants give students and postdocs exposure to problems of practical significance, which often become part of their thesis work – and connections with industry that can lead to future employment.

Dirk Arnold & BlueLight Analytics

BlueLight Analytics specializes in the measurement of curing-light energy delivery in dental practice. Their technology allows accurate measurement of the light energy delivered by dental curing lights, but its cost means that, rather than using it continually, dental practitioners rely on measurements done periodically by external consultants. Dental curing lights can deteriorate significantly between measurements, resulting in inadequate curing times and thus premature restoration failures.

This project will tackle the development of techniques for monitoring the effectiveness of dental curing lights based on low-quality images captured with ordinary cell phone cameras. This will enable dentists to continually monitor their curing lights at low cost and thus detect deterioration that would negatively affect restorations. While cell phone cameras are inadequate for accurate measurement of the light energy delivered to dental materials, they are expected to be sufficient for detecting significant changes in the output of individual curing lights over time.
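
As a rough illustration of the idea (not BlueLight's or the project's actual method), a simple image-based check might track the apparent brightness of the light in successive phone photos and flag a sustained drop; the file names and the threshold below are hypothetical.

```python
# Illustrative sketch only: estimate relative curing-light output from ordinary
# phone photos by tracking the mean intensity of the brightest pixels over time,
# and flag a sustained drop. File names and the 20% threshold are hypothetical.
import numpy as np
from PIL import Image

def relative_output(path, top_fraction=0.01):
    """Mean intensity of the brightest pixels, a crude proxy for light output."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    k = max(1, int(gray.size * top_fraction))
    return np.sort(gray, axis=None)[-k:].mean()

baseline = relative_output("curing_light_week00.jpg")
for photo in ["curing_light_week04.jpg", "curing_light_week08.jpg"]:
    ratio = relative_output(photo) / baseline
    if ratio < 0.8:  # illustrative threshold: >20% apparent drop vs. baseline
        print(f"{photo}: output down {100 * (1 - ratio):.0f}% -- have the light checked")
```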

Nur Zincir-Heywood & 2Keys

Keeping modern computer systems and networks secure is a time-consuming and knowledge-intensive process. Many open-source and commercial off-the-shelf systems are available to monitor and detect malicious behaviours, but security analysts still have to spend hours differentiating between false alarms and real attacks.

To address these challenges for 2Keys, the NIMS Lab plans to investigate and identify the predictors of malicious activity based on the application data sources of these security systems.
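
As one hedged illustration of what identifying such predictors could look like (not the NIMS Lab's actual approach), a classifier could be trained on analyst-labelled alerts; the CSV layout and feature names below are hypothetical.

```python
# Illustrative sketch only: learn which alert features best separate false alarms
# from confirmed attacks. The CSV layout and feature names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

alerts = pd.read_csv("labelled_alerts.csv")            # hypothetical analyst-labelled alerts
features = ["alert_count_1h", "distinct_ports", "bytes_out", "signature_severity"]
X, y = alerts[features], alerts["is_real_attack"]       # 1 = confirmed attack, 0 = false alarm

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Feature importances hint at which data sources act as the strongest predictors.
print(dict(zip(features, model.feature_importances_.round(3))))
```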

Derek Reilly & QRA

QRA's mission is to develop software tools for the analysis and verification of system requirements in complex cyber-physical systems, such as those used in safety-critical industries like aerospace, automotive, medical technology, power generation, and defence.

QRA’s primary product is a SAT-solver-based model checker for engineering designs, allowing engineers to validate a model against formal requirements before proceeding with implementation. Currently, the model checker returns results in a raw, variable-based textual format that does not reflect the natural graphical orientation of the models the engineers are designing.

GEM Lab will consider how to visualize the model-checking behaviour and the resulting violations in a manner that communicates the algorithmic process while remaining connected to the model. Engineers would see a visualization of the model-checking process linked to an interactive model of the cyber-physical system; as the current focus moves to a subcomponent they modified, a violation may be reported. The engineer would then use the software to export a set of inputs to the suspect subcomponent that recreates its state at the time of the violation. Over time, the engineer may begin to notice patterns emerging across violations – those characteristic of an overflow condition on a floating-point output, for example – facilitating debugging and verification.
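
For illustration only, the sketch below shows the kind of post-processing this implies: grouping a raw, variable-based counterexample trace by subcomponent so that a violation can be tied back to an element of the model. The trace format is hypothetical, not QRA's actual output.

```python
# Illustrative sketch only: group a raw variable-based counterexample trace by
# subcomponent so a violation can be linked back to the model. Format is hypothetical.
from collections import defaultdict

raw_trace = """\
pump.valve_open = true
pump.pressure_out = 312.5
controller.setpoint = 300.0
controller.alarm = false
"""

by_component = defaultdict(dict)
for line in raw_trace.strip().splitlines():
    name, value = (part.strip() for part in line.split("=", 1))
    component, variable = name.split(".", 1)
    by_component[component][variable] = value

# Each per-component state could be exported as a set of inputs that recreates
# the suspect subcomponent's state at the time of the violation.
for component, state in by_component.items():
    print(component, state)
```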

Jeannette Janssen & Scotiabank

In an effort to maximize benefit to both the company and its customers, Scotiabank engages with its customers by highlighting existing products of high potential benefit and by offering new products.

This marketing effort is based on extensive quantitative analysis performed by the company’s Decision Sciences department. An integral part of this analysis is a complex optimization problem, which Scotiabank is looking to solve more efficiently. This project aims to develop a flexible Optimization Solution Tool that is adaptable to new constraints that may arise in the future. The tool should also incorporate new ideas that allow the solution to better aid decision-making.
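
As a hedged, miniature illustration of what such a tool might look like (not Scotiabank's actual model), the toy linear program below chooses which offers to make under a contact budget, written so that additional constraint rows can be appended as new requirements arise; all numbers are invented.

```python
# Illustrative sketch only: a toy contact-planning problem. Benefits, costs, and
# the budget are invented; new constraint rows can be appended as requirements change.
from scipy.optimize import linprog

expected_benefit = [5.0, 3.5, 2.0, 4.2]   # per customer-offer pair
cost_per_contact = [1.0, 0.8, 0.5, 1.2]
budget = 2.0

# Maximizing benefit = minimizing its negation; x[i] in [0, 1] is the contact decision.
result = linprog(
    c=[-b for b in expected_benefit],
    A_ub=[cost_per_contact],               # append rows here as new constraints arise
    b_ub=[budget],
    bounds=[(0, 1)] * len(expected_benefit),
)
print("contact plan:", result.x.round(2), "expected benefit:", round(-result.fun, 2))
```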

Evangelos Milios & Proximify

Proximify is a web services company that provides software supporting the creation of private collaboration networks. Connections are identified among members of the network on the basis of shared interests, skills, and experience. The software also supports different views of the organization - including maps, publications, and intelligence clusters - combined with filters to customize the views to the user's requirements.

Currently, each member is required to manually select areas of expertise, or define their own, making the extraction and representation of this information a challenge.

This project will investigate novel representations of expertise that are extracted automatically from the publications or reports a member has authored. These representations are semantic in that they capture the meaning of the publications and are robust in the presence of vocabulary mismatch between member expertise profiles. The semantic representations will support the important functionality of matchmaking between industry and faculty members, based on a free text description of the company's interests, and interactive visualizations of expertise landscapes of units and institutions.
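
For a sense of the matchmaking functionality, the sketch below ranks members against a free-text company description using a simple keyword (TF-IDF) baseline; the planned semantic representations would go further, precisely because this kind of baseline breaks down under vocabulary mismatch. All texts are invented.

```python
# Illustrative sketch only: rank members against a free-text company description
# using a keyword (TF-IDF) baseline. The publication texts and query are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

member_publications = {
    "member_a": "deep learning for medical image segmentation and tumour classification",
    "member_b": "graph algorithms for network routing and combinatorial optimization",
}
company_query = "classifying tumours in radiology images with machine learning"

corpus = list(member_publications.values()) + [company_query]
vectors = TfidfVectorizer(stop_words="english").fit_transform(corpus)
scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()

for member, score in sorted(zip(member_publications, scores), key=lambda pair: -pair[1]):
    print(f"{member}: {score:.2f}")
```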

Jamie Blustein & Proximify

Organizations with a multilingual online presence face challenges associated with streamlining the translation process and complying with W3C's accessibility initiatives. Existing content management systems (CMS) are not designed for creating multilingual websites. Often, the only solution is to build a separate website for every language, which can double or triple the effort required to maintain a website.

Proximify's GLOT CMS integrates features for translation workflow and accessibility semantics. The main open challenge in GLOT is the automatic conversion of existing webpages into the GLOT language - that is, taking the HTML and CSS of a webpage and rewriting it in terms of semantic components. This research project will automate the identification of functional areas in webpages; the output will be used to generate the same webpages with added multilingual and accessibility capabilities.
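
As a hedged illustration of the identification step (GLOT's actual component language and rules are not described here), the sketch below labels functional areas of a page using tag, role, and class-name hints; the HTML and hint lists are made up.

```python
# Illustrative sketch only: label functional areas of a page using role and
# class-name hints. The HTML and hint lists are invented; GLOT's rules differ.
from bs4 import BeautifulSoup

html = """
<div class="site-nav"><a href="/">Home</a><a href="/about">About</a></div>
<div class="article-body"><h1>Title</h1><p>Main content...</p></div>
<div role="contentinfo">(c) 2015 Example Inc.</div>
"""

HINTS = {
    "navigation":   ["nav", "menu"],
    "footer":       ["footer", "contentinfo", "copyright"],
    "main-content": ["article", "content", "main", "body"],
}

def classify(element):
    clues = " ".join(element.get("class", []) + [element.get("role", "")]).lower()
    return next((label for label, keys in HINTS.items()
                 if any(k in clues for k in keys)), "unknown")

for div in BeautifulSoup(html, "html.parser").find_all("div"):
    print(classify(div), "->", div.get_text(" ", strip=True)[:40])
```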

A major outcome of this research would be the automatic generation of accessible webpages from ordinary webpages. Furthermore, specific and accurate labeling of webpage sections would greatly improve search engines' precision and open new areas for information retrieval research in general.

Norbert Zeh & Google

Bugs cause software to misbehave, sometimes with serious consequences. A software crash can cause you to lose data; vulnerable web browsers may allow others to access sensitive private information; faulty software in medical or airline equipment may even cost lives.

While software engineering techniques and rigorous testing are effective in reducing the number of bugs, they cannot eliminate them completely. A common technique used to investigate bugs in production systems is the use of crash dumps: information about the program state and relevant memory contents at the time of the crash. These memory contents are collected in raw binary form, without any information about the type of data and the values they represent, but this information is key to investigating and fixing bugs.

Using information about the paths through which a program accesses certain memory locations and using knowledge about the representation of data types, these raw binary data can be annotated with the types of data and the values they represent. In the presence of bugs, however, different annotation rules may lead to conflicting annotations.

Indeed, the most common source of bugs is different parts of the program using the same memory location in conflicting ways, e.g., treating it as different types of data. The first goal of this proposal is to develop algorithms to reconcile conflicting annotations and assign a confidence score to the resulting reconciled annotations.
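
As a simple illustration of the reconciliation idea (not the project's actual algorithm), conflicting annotations for the same memory location can be treated as weighted votes, with the winning type reported alongside a confidence score; the addresses, types, and rule weights below are invented.

```python
# Illustrative sketch only: treat each annotation rule as a weighted vote on the
# type of a memory location and report the winning type with a confidence score.
# Addresses, proposed types, and rule weights are invented.
from collections import defaultdict

# (address, proposed_type, rule_weight) produced by hypothetical annotation rules
annotations = [
    (0x7f3a10, "int32",   0.9),
    (0x7f3a10, "float32", 0.4),   # a conflicting annotation for the same location
    (0x7f3a18, "char*",   0.8),
]

votes = defaultdict(lambda: defaultdict(float))
for address, proposed_type, weight in annotations:
    votes[address][proposed_type] += weight

for address, candidates in votes.items():
    best_type, best_weight = max(candidates.items(), key=lambda kv: kv[1])
    confidence = best_weight / sum(candidates.values())
    print(f"{address:#x}: {best_type} (confidence {confidence:.2f})")
```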

The second goal is to scope out a long-term research plan focused on developing a system that can mine large collections of crash dumps collected from millions of users of Google's Chrome browser and produce actionable bug summaries for developers. Each summary would explain what the system thinks went wrong, along with a list of supporting evidence collected from the crash reports and the deductive steps taken to reach this conclusion. A developer can then take such a bug summary as a starting point for efforts to fix the bug. This type of bug summarization can be performed by an expert developer, but doing so is time-consuming. Thus, in order to effectively address the bugs reported in millions of automatically generated crash dumps, the process needs to be automated using efficient algorithms.

Hong Gu & Palomino

Palomino is a web document management company with experience managing health documents for the Guysborough Antigonish Strait Health Authority in Nova Scotia. Radiology reports are dictated by the doctor, and the speech is transcribed and stored as text.

Turning text reports into structured information will be a significant boost to the exploitation of electronic health information for population health, and such a capability will significantly enhance Palomino's expertise in the space of electronic health records. Two challenges need to be addressed to achieve a sufficiently accurate system that extracts structured data from the unstructured text of radiology reports.

First, the different types of errors introduced by the speech-to-text conversion need to be identified and corrected. Errors can be spelling mistakes (e.g., 'abnormaliti' as opposed to 'abnormality'), missing spaces or word segmentation errors (e.g., 'noevidence' as opposed to 'no evidence'), or symbol errors (e.g., 'intra abdominal' as opposed to 'intra-abdominal'). Second, entities (medical conditions) and their values (e.g., normal vs. abnormal) must be extracted from the corrected text by reference to a taxonomy of medical conditions, taking into account the abbreviated nature of the text, which does not always contain fully formed sentences in terms of syntax.

Once a corpus of radiology reports has been converted into structured data, in the form of condition-value pairs, classification will be applied over a large radiology report corpus to provide insights into population health.
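
As a hedged sketch of both steps under simplified assumptions, the example below corrects segmentation and spelling errors against a small vocabulary and then assigns condition values using nearby negation cues; the vocabulary, taxonomy, and report text are all made up.

```python
# Illustrative sketch only: correct segmentation and spelling errors against a small
# vocabulary, then assign condition values from nearby negation cues. The vocabulary,
# taxonomy, and report text are all invented.
import difflib
import re

VOCAB = {"no", "evidence", "of", "intra", "abdominal", "abnormality", "mild", "pleural", "effusion"}
TAXONOMY = ["abnormality", "effusion"]     # stand-in for a taxonomy of medical conditions

def correct(report):
    words = []
    for token in report.split():
        if token in VOCAB:
            words.append(token)
            continue
        for i in range(1, len(token)):     # segmentation errors, e.g. "noevidence" -> "no evidence"
            if token[:i] in VOCAB and token[i:] in VOCAB:
                words.extend([token[:i], token[i:]])
                break
        else:                              # spelling errors, e.g. "abnormaliti" -> "abnormality"
            match = difflib.get_close_matches(token, VOCAB, n=1, cutoff=0.8)
            words.append(match[0] if match else token)
    return " ".join(words)

def extract(text):
    findings = {}
    for condition in TAXONOMY:
        if condition in text:
            window = text[max(0, text.find(condition) - 40):text.find(condition)]
            findings[condition] = "normal" if re.search(r"\bno\b", window) else "abnormal"
    return findings

report = "noevidence of intra abdominal abnormaliti mild pleural effusion"
corrected = correct(report)
print(corrected)           # no evidence of intra abdominal abnormality mild pleural effusion
print(extract(corrected))  # {'abnormality': 'normal', 'effusion': 'abnormal'}
```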

Evangelos Milios & Analytic-OR

Analytic-OR is interested in automating the task of collecting and compiling information about planned activities (e.g. conferences, professional events, workshops) of interest to specific professional groups (e.g. health professionals) from a long and changing list of web sites in a wide variety of formats.

This is an information extraction problem: algorithms and software are needed that will automatically scan the list looking for events of interest to the client professional group and, once such an event is identified, extract its title, dates, location, the address of the event web site, and other useful information, and compile it in a form that is easily accessible to the client. Due to the large variety of formats and the large number of web sites, a solution that avoids implementing a custom script for each web site is required. An additional problem is that the address of the event web site is often wrong, and additional manual work is needed to find it.
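
For illustration, the sketch below pulls a candidate title, dates, and event web site address out of a page without a site-specific script; the HTML and patterns are invented, and real pages vary far more widely.

```python
# Illustrative sketch only: extract a candidate title, dates, and event web site
# address from a page without a site-specific script. HTML and patterns are invented.
import re
from bs4 import BeautifulSoup

html = """
<html><head><title>HealthTech 2016 - Annual Conference</title></head>
<body><h1>HealthTech 2016</h1>
<p>June 14-16, 2016, Halifax, Nova Scotia. Details: http://healthtech2016.example.org</p>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
text = soup.get_text(" ", strip=True)

dates = re.search(r"[A-Z][a-z]+ \d{1,2}(?:-\d{1,2})?, \d{4}", text)
website = re.search(r"https?://\S+", text)
event = {
    "title": soup.h1.get_text(strip=True) if soup.h1 else soup.title.get_text(strip=True),
    "dates": dates.group(0) if dates else None,
    "website": website.group(0) if website else None,
}
print(event)
```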

Once an event of interest has been extracted, it is useful for the client organizations to track the buzz and monitor activity around it in social media (such as Twitter and Facebook), including monitoring of active influencers, and to summarize the findings in an easy-to-digest format as they evolve over time. The proposed project aims to make significant progress on these challenging problems and to transfer the technology to Analytic-OR for its next-generation business intelligence web service.

Peter Bodorik & NICOM

Port operations supply chains are complex, as they involve many stakeholders with varying objectives and requirements. Port authorities are striving to collect sufficient data from the stakeholders to improve (i) operational efficiencies; (ii) strategic decisions on asset use, maintenance, and personnel; and (iii) marketing and business development. Nicom Maritime is searching for a software solution that would enable the retrieval of sufficient data from port operations stakeholders to gain the benefits listed above.

The problem is that current ICT-based solutions, particularly in North America, provide the stakeholders with their own internal systems that often operate as independent silos and limit data sharing to only what is needed to support the required operations. Furthermore, the stakeholders are reluctant to disclose detailed information about their operations for reasons of security and because they worry that their data may be obtained by competitors, who could gain insight into their operations and use it against them. Thus, any data exchange with the stakeholders must provide not only strong security for data in transit and in storage, but also privacy assurances that the data is accessed only by authorized entities and used only for the purposes for which it was collected. Another issue is data integration, as the stakeholder systems exchange data through interfaces that use different data and protocol standards.

Thus, the Nicom Maritime and Dalhousie University team will address the following challenges in this research project:

1) Information retrieval from stakeholders must be performed using their existing interfaces (so that minimal modification of the stakeholder systems is needed).

2) Retrieval of information from a stakeholder must be authorized and audited.

3) Data transfer must be secure, and, once the data is collected, the privacy of data access must be ensured.

4) Issues arising from differences in the data exchange standards and protocols used by stakeholders must be resolved.