Staff Seminars

Each month a member of staff gives an overview of the latest developments in their field of research. Seminars are pitched at a general level.

Software Language Engineering at Macquarie

When: 26 August, 3--4pm

Where: E6A357

Speaker: Anthony Sloane

Abstract:
Software Language Engineering (SLE) is the study of formal techniques that assist in the development and use of languages for software development.

This talk will present an overview of SLE research at Macquarie. I will describe SLE applications ranging from standard programming language compilers to the Skink program verifier currently under development here. I will illustrate the interesting SLE problems that arise in building such applications and describe how we are solving them. In particular, I will give overviews of Macquarie SLE tools, including the Kiama language processing library and the sbt-rats parser generator.
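
To give a flavour of the kind of problem these tools address, here is a minimal Python sketch of constant folding, a classic tree transformation performed by language processors. It is purely illustrative and does not use Kiama's or sbt-rats's actual APIs.

    # Illustrative only: a tiny AST and a constant-folding rewrite,
    # the kind of transformation language processing tools automate.
    from dataclasses import dataclass

    @dataclass
    class Num:
        value: int

    @dataclass
    class Add:
        left: object
        right: object

    def fold(node):
        """Bottom-up rewrite: replace Add(Num, Num) with the computed Num."""
        if isinstance(node, Add):
            left, right = fold(node.left), fold(node.right)
            if isinstance(left, Num) and isinstance(right, Num):
                return Num(left.value + right.value)
            return Add(left, right)
        return node

    # (1 + 2) + x  becomes  3 + x
    tree = Add(Add(Num(1), Num(2)), "x")
    print(fold(tree))   # Add(left=Num(value=3), right='x')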

Seminars Held in 2016


Human Information Processing & Gestural Communication in Virtual Reality

When: 27 May, 3--4pm

Where: E6A357

Speaker: Manolya Kavakli

Abstract: This seminar reviews Human-Computer Interaction (HCI) from both technological and human points of view, and focuses on a number of research studies conducted at the Virtual Reality Lab to discuss the importance of interdisciplinary collaboration and the contributions from psychology, design, human factors, computer science, and engineering. The studies presented include an ARC Discovery Grant, a postdoctoral fellowship, a PhD, and three MIT projects exploring the same issue from various points of view and at different scales. This may give Computing academics some ideas for structuring next semester's MIT projects (itec810 and itec812).

HCI refers to the design and implementation of computer systems that people interact with. It has the human at its core but requires designing the interaction between humans and computer technology. To design an appropriate interaction model, one needs to know how humans process information and whether human information processing architectures differ from one person to another. The acceptance of technology is highly dependent on the way we process information.

We conducted a number of experimental studies on human information processing. Our findings show that there are differences in information processing and gestural communication between novices and experts, Anglo-Celtic and Latin cultures, as well as males and females. Humans use different cognitive architectures for information processing, and user demographics should be taken into account in interface design.

In this seminar, we will review these background studies investigating the human factors relevant to the design of ubiquitous systems, demonstrate the differences in human information processing, especially in the integration of speech and hand gestures, and discuss where the biggest challenges remain for the development of multimodal system design. The development of user-adaptive systems may increase the acceptance of IT and facilitate the design of multimodal systems accommodating the requirements of different expertise levels, cultural backgrounds, and gender groups. The integration of speech and hand gestures is an important research problem in multimodal interface design. Our association with IT may be driven not only by the way we identify ourselves with the technology, but also by differences in the way we process information.


ReputationPro: Efficient Contextual Transaction Trust Computation in E-Commerce Environments

Friday 29 April 2016, 3--4pm, A/Prof. Yan Wang

In e-commerce environments, the trustworthiness of a seller is critically important to potential buyers, especially when a seller is not known to them. Most existing trust evaluation models compute a single value to reflect the general trustworthiness of a seller without taking any transaction context into account. With such a result as the indication of reputation, a buyer may be easily deceived by a malicious seller in a transaction involving the notorious value imbalance problem: a malicious seller accumulates a high reputation by selling cheap products and then deceives buyers by inducing them to purchase more expensive ones.

In this talk, we first present a trust vector consisting of three values for contextual transaction trust (CTT). The computation of CTT values takes into account three identified important context dimensions: Product Category, Transaction Amount, and Transaction Time. At the same time, the computation of each CTT value is based on both past transactions and the forthcoming transaction. In particular, with different parameters specified by a buyer regarding context dimensions, different sets of CTT values can be calculated. As a result, these trust values outline the reputation profile of a seller, indicating the seller's dynamic trustworthiness in different products, product categories, price ranges, time periods, and any necessary combination of them. We name this new model ReputationPro. However, computing the reputation profile requires new data structures for appropriately indexing the precomputation of aggregates over large-scale rating and transaction data in three context dimensions, as well as novel algorithms for promptly answering buyers' CTT queries.

To solve these challenging problems, we first propose a new index scheme, the CMK-tree, which extends the two-dimensional K-D-B-tree used for indexing spatial data, to support efficient computation of CTT values. We then further extend the CMK-tree and propose a CMK-treeRS approach to reduce the storage space allocated to each seller. Experimental results illustrate that the CMK-tree is superior to all three existing approaches in the literature in the efficiency of computing CTT values. In particular, when answering a buyer's CTT queries for each brand-based product category, the CMK-tree has almost linear query performance. In addition, with significantly reduced storage space, the CMK-treeRS approach can further improve the efficiency of computing CTT values. Our proposed ReputationPro model is therefore scalable to large-scale e-commerce websites in terms of both efficiency and storage consumption.
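
As a rough illustration of contextual trust aggregation (a toy sketch only; the CMK-tree indexing and query algorithms from the talk are far more involved), one can bucket past transactions by the three context dimensions and compute a per-bucket trust value:

    # Toy sketch: trust aggregated per (category, price band, period) context.
    # Illustrative only; not the CMK-tree precomputation from the talk.
    from collections import defaultdict

    transactions = [
        # (product_category, amount, year_month, positive_rating)
        ("books",     20.0, "2016-01", True),
        ("books",     25.0, "2016-02", True),
        ("laptops", 1800.0, "2016-02", False),
    ]

    def price_band(amount):
        return "low" if amount < 100 else "high"

    def contextual_trust(txns):
        """Fraction of positive ratings per (category, band, month) context."""
        pos, total = defaultdict(int), defaultdict(int)
        for cat, amount, month, good in txns:
            key = (cat, price_band(amount), month)
            total[key] += 1
            pos[key] += good
        return {k: pos[k] / total[k] for k in total}

    # A seller may look trustworthy for cheap books but not expensive laptops,
    # which a single overall reputation value would hide.
    print(contextual_trust(transactions))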


Selected Information Systems research approaches, with a focus on the Knowledge Management characteristics of specific organisations

Friday, 1 April 2016, 3--4pm, Dr. Peter Busch

Information Systems (IS) research is not only social science in nature; it also frequently draws upon a range of methods used in other disciplines, e.g. sociology, psychology, business, and management. This talk will commence with a discussion of the epistemologies, or philosophies, guiding IS research, as well as selected methodologies applicable to these epistemologies. Following on from this, there will be a discussion of knowledge management research, specifically with regard to organisational learning and knowledge diffusion. Also covered will be aspects of knowledge management in the university environment involving Business Process Management (BPM) and Social Network Analysis (SNA), as well as recent work on student understanding of plagiarism and on factors affecting the acceptance of offers of places. Finally, there will be some discussion of the parameters influencing cloud computing adoption by Australian Small and Medium Sized Enterprises (SMEs), which represent roughly 95% of Australian organisations.


Seminars Held in 2015


New approaches to archaeology: recent case studies from Egypt and Macquarie University

Friday 28th August 2015, 3pm, Dr Yann Tristant and Michael Rampe

Archaeology is the study of past cultures and the way people lived, based on the things they left behind, such as pottery, a skeleton buried in the ground, or the remains of a large stone temple. At the beginning of this year, the Department of Ancient History launched a Bachelor of Archaeology in which students can combine the degree's core units in Archaeology with a major from the Faculty of Science and Engineering, including two from the Department of Computing. Through several case studies taken from current archaeological projects conducted in Egypt by Dr Yann Tristant, this lecture aims to show the potential for research collaboration between staff and students that emerges from the field of archaeology. It will also highlight the collaboration between the Learning and Teaching Centre and the Department of Ancient History, which have been pioneering 3D scanning, printing, and web delivery for educational purposes. We have developed a low-cost capability to scan, deliver, and manipulate 3D imagery over the web using very low bandwidth and resource requirements. The project also integrates with other technology initiatives, such as the use of 3D printing to produce facsimiles of ancient historical artefacts and anthropological specimens.

The speakers:

Dr Yann Tristant studied Egyptology and Prehistory at the École du Louvre and the University of La Sorbonne in France. He received his PhD in 2006 on the basis of a dissertation on the Nile Delta during the Predynastic and Early Dynastic periods. Dr Tristant was a scientific fellow of the French Institute of Archaeology in Cairo (IFAO) from 2006 to 2010. His main fields of expertise are Egyptian archaeology and society as well as Pre- and Early Dynastic Egypt. He has worked on a number of sites in various parts of Upper and Lower Egypt as well as in the oases. He is currently in charge of excavations at Abu Rawash (Memphite area), Dendera (Upper Egypt), and Wadi Araba (Eastern Desert), where he is undertaking an archaeological survey.

Michael Rampe is an Educational Designer at the Learning & Teaching Centre, Macquarie University. He has been pioneering work in 3D scanning, printing and web delivery for educational purposes and will share the results of this work during this lecture.


Entropy farm: how to measure dangerous information leaks

Friday 3 July 2015, 3pm, Prof. Annabelle McIver

In computer security, it is frequently necessary in practice to accept some leakage of confidential information.
This motivates the development of theories of Quantitative Information Flow aimed at showing that some leaks are “small” and therefore tolerable.

In this talk I will present a survey of recent developments concerning how to measure the severity of information leaks based on new definitions of entropy.
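
For intuition, here is a small worked sketch of one such measure, min-entropy leakage, computed from a channel matrix using the standard quantitative-information-flow definitions. The numbers are made up for illustration.

    # Min-entropy leakage of a channel C (rows: secrets x, columns: observations y).
    # Standard QIF definitions; example numbers are illustrative only.
    import math

    prior = [0.5, 0.25, 0.25]           # distribution over secrets
    C = [                               # C[x][y] = P(y | x)
        [0.8, 0.2],
        [0.5, 0.5],
        [0.1, 0.9],
    ]

    # Prior Bayes vulnerability: the adversary's best one-guess success rate.
    v_prior = max(prior)

    # Posterior vulnerability: expected best guess after seeing the observation.
    v_post = sum(max(prior[x] * C[x][y] for x in range(len(prior)))
                 for y in range(len(C[0])))

    leakage = math.log2(v_post / v_prior)   # min-entropy leakage, in bits
    print(f"V_prior={v_prior}, V_post={v_post:.3f}, leakage={leakage:.3f} bits")

Here the leak is about 0.32 bits: small, and in many settings tolerable, which is exactly the kind of judgement these theories aim to make precise.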


Fine-grained resource sharing for cloud data centre efficiency

Fri 29th May 2015, 3pm, Dr. Young Choon Lee

Resource sharing using hardware virtualization has become increasingly common for cloud data centre efficiency. Such virtualization allows multiple workloads to share a common set of resources in a single physical machine. In practice, however, these co-located workloads often compete for resources, leading to resource usage that is non-isolable and intrusive. This intrusive resource sharing is a major source of cloud data centre inefficiency. In this talk, I'll discuss fine-grained resource allocation and scheduling solutions that enable co-located workloads to use resources organically. These solutions exploit the heterogeneity and dynamicity of cloud data centres, which are often perceived as the main hurdles of resource management.


Text mining for evidence based medicine

Fri 1st May 2015, 3pm, Dr. Diego Molla Aliod

In this talk I will present some of the challenges that the medical practitioner faces in the practice of Evidence Based Medicine, and I'll survey some of the text mining methods that we are applying to attempt to solve these problems. A crucial aspect of Evidence Based Medicine is the need to incorporate the currently available clinical evidence at the point of care. However, the physician is overwhelmed by the large volume of published clinical studies and cannot keep up to date with the latest clinical evidence. We have gathered a corpus of clinical questions and evidence-based answers sourced from the Journal of Family Practice. We are using this corpus to determine effective methods to find, extract, appraise, and present clinical evidence. Given a clinical question and a list of relevant documents, we use clustering methods to group the medical articles into the key components of the answer (e.g. alternative treatments for a condition). We use statistical classifiers to appraise the quality of the evidence in each cluster, and we have developed text summarisation systems that output the specific contribution of each article.
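
As a hedged sketch of the clustering step (illustrative only; the project's actual features and models are not shown here), one might group retrieved abstracts with TF-IDF features and k-means:

    # Illustrative sketch: cluster article abstracts into candidate answer
    # components using TF-IDF + k-means. Not the project's actual pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    abstracts = [
        "Beta blockers reduced blood pressure in the trial cohort.",
        "ACE inhibitors lowered blood pressure with few side effects.",
        "Cognitive behavioural therapy improved insomnia symptoms.",
        "Sleep hygiene education showed modest benefits for insomnia.",
    ]

    X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Ideally, each cluster gathers the evidence for one treatment alternative.
    for label, text in sorted(zip(labels, abstracts)):
        print(label, text)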


Verification and control of critical systems

Fri 27th March 2015, 3pm, A/Prof Franck Cassez

In this talk, I will present some models, techniques, and tools to formally prove qualitative and quantitative properties of safety-critical systems. I will discuss some examples, among them the synthesis of an oil pump controller and the static analysis of C programs.

Seminars Held in 2014


Controlled natural languages

Fri 7 November 2014, 3pm, Dr Rolf Schwitter

Controlled natural languages are simplified forms of natural languages; they are constructed from natural languages by restricting the size of the grammar and the vocabulary in order to reduce or eliminate ambiguity and complexity.

In this talk, I will first give a short introduction to controlled natural languages and show how these languages have been used to: improve the communication between humans, ameliorate technical documentation, improve machine translation, and represent formal notations in a seemingly informal way. I will then focus on two research projects where I have used controlled natural language as a high-level interface language for semantic systems: one to support human situation awareness, and the other to improve the representation of business rules. Finally, I will outline some technical details about the controlled language processing system that I am currently developing; a system that facilitates the writing of specifications in controlled language and translates these specifications into executable answer set programs.
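
To make this concrete, here is a toy illustration (my own sketch, not the speaker's system) of how a pattern-based translator might map controlled-English sentences to answer set programming rules:

    # Toy sketch: translate two controlled-English sentence patterns into
    # answer set programming (ASP) rules. Not the speaker's actual system.
    import re

    def translate(sentence):
        # "Every student studies."  ->  "studies(X) :- student(X)."
        m = re.fullmatch(r"Every (\w+) (\w+)\.", sentence)
        if m:
            noun, verb = m.groups()
            return f"{verb}(X) :- {noun}(X)."
        # "John is a student."  ->  "student(john)."
        m = re.fullmatch(r"(\w+) is a (\w+)\.", sentence)
        if m:
            name, noun = m.groups()
            return f"{noun}({name.lower()})."
        raise ValueError(f"outside the controlled language: {sentence!r}")

    for s in ["John is a student.", "Every student studies."]:
        print(translate(s))
    # student(john).
    # studies(X) :- student(X).

Because the grammar admits only a fixed set of sentence patterns, every accepted sentence has exactly one reading, which is what makes the translation to an executable program possible.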


Narrative AI: The challenges of building an intelligent storyteller

Fri 10 October 2014, 3pm, Dr Malcolm Ryan

Stories play an important part in our lives. They shape our memories, our conversation, our entertainment, and the way we learn. An effective story is engaging and memorable, but a poorly told story leaves us confused and unsatisfied. Narrative theorists and psychologists have striven to understand what makes for a good story. In recent years, artificial intelligence researchers have joined them, in an attempt to codify the basic structure of narrative in terms that can be understood and employed by a computer.

Narrative AI (or computational storytelling) is a project to write programs that can understand and generate stories. The aim of this project is not to supplant human authors, but to empower them, in much the same way that CGI tools empower visual artists. The rules of narrative, however, are much harder to program than those of vision, as they tend to be much more subjective. Nevertheless, there are simple rules of coherency (in terms of action, character and plot) which distinguish a meaningful story from nonsense.

In this presentation I will discuss some of the particular problems in the field of Narrative AI and some of the progress my students and I have recently made towards addressing these problems.


Simulations, Virtual Worlds and Intelligent Virtual Agents for education, training and health

Fri 8th August 2014, 3pm, Prof Deborah Richards

This talk will provide an overview of the types and uses of simulations and virtual worlds, with a major focus on intelligent virtual agents (IVAs). IVAs have been a growing area of research within the field of Artificial Intelligence over the past 20 years. An IVA is a piece of software, generally considered to be autonomous in some way, that imitates the behaviour of a human or animal and is embodied within a virtual environment. A primary aim in the field of virtual agents is the creation of believable characters that are useful in their situated paradigm (e.g. games, narratives, education, assistive computing, etc.). There is a significant body of work in the area of believable characters, which may be known as pedagogical agents, embodied conversational agents, artificial companions, talking heads, or empathic or listening agents, depending on their function, level of sophistication, or the particular research focus, such as emotion and appraisal systems or language technology. The talk will focus on “relational agents”, which have been shown to achieve better health outcomes for patients with low computer, reading, and health literacy skills, and which can build long-term socio-emotional relationships with users, including trust, rapport, and therapeutic alliance, for the purpose of enhancing adherence to treatment.

The talk will provide an overview of the field, including my research concerning IVAs and memory, emotions and collaborative learning for applications such as debriefing and reminiscing, border security officer training, scientific inquiry and science education, real estate assistance, museum guidance, and most recently, bedwetting in children.


Supporting reproducible research on language

Friday 13 June 2014, 3pm, A/Prof Steve Cassidy

In this talk, I’ll give an overview of the work I do on managing collections of language data to support researchers interested in many aspects of human communication. Modern language research revolves around collections of text, speech and video that can be mined for interesting phenomena or used to train or test new algorithms. We are currently in transition from a world where these collections are exchanged on physical media to one where they are managed in centralised, online repositories. My work has focused on the design of these repositories and solving some of the problems of data representation and processing that arise when we try to unify a diverse set of resources in a common technical framework. One of the goals of this work is to provide an archival reference collection that can support reproducible research and allow publication of references to the data that is used in a study.

This talk will look at our work on the Australian National Corpus and the Alveo Virtual Laboratory. These are major initiatives that bring together a number of Australian collections and provide online access to them. I’ll talk about some of the data representation problems I’m interested in; in particular how to represent annotations on linguistic data and how to design and implement query systems that are useful to language researchers. I’ll also touch on some of the issues raised when you ask researchers to share their data and the access control framework we’ve developed to address them.
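
As a rough illustration of the standoff style of annotation such repositories must represent (a simplified sketch; Alveo's actual annotation model is richer), annotations point into an immutable source text by character offsets, so many annotation layers can coexist over the same data:

    # Simplified sketch of standoff annotation: labels refer to spans of an
    # immutable source text by offsets. Illustrative only; not Alveo's model.
    from dataclasses import dataclass

    @dataclass
    class Annotation:
        start: int     # character offset, inclusive
        end: int       # character offset, exclusive
        layer: str     # e.g. "pos", "syntax", "speaker"
        label: str

    text = "she sells sea shells"
    annotations = [
        Annotation(0, 3, "pos", "PRP"),
        Annotation(4, 9, "pos", "VBZ"),
        Annotation(0, 9, "syntax", "clause"),
    ]

    def spans_overlapping(anns, start, end):
        """Simple query: all annotations overlapping a region of the text."""
        return [a for a in anns if a.start < end and a.end > start]

    for a in spans_overlapping(annotations, 2, 5):
        print(a.layer, a.label, repr(text[a.start:a.end]))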


Using “Life Story” models to improve text data mining

Fri 9th May 2014, 3pm, Mark Johnson

The goal of named entity linking and relation extraction is to extract information from large text document collections and store it in a database. I’ll explain how this text data mining problem can be attacked using natural language processing and machine learning techniques. Then I’ll describe how we’re planning to use such databases to build “Life Story” models of the sequence of events that occur in an individual’s life, and how this information might be used to further improve text data mining. Finally, I’ll present some experimental results suggesting that even very simple background information about individuals extracted from such a database can markedly improve relation extraction accuracy.

(This is joint work with Anish Kumar and Lan Du.)


Compute it!

Friday 11 April 2014, 3pm, Christophe Doche

Computations are an effective way to tackle a range of theoretical problems. For instance, computations are useful to produce data that may increase our understanding of a problem. This can ultimately contribute to the proof of an abstract result. I like to work on research questions where computations are at the heart of the matter, where computing a particular quantity or object is itself the problem.

I will go through several examples, focusing mainly on an object called an elliptic curve, which has both destructive and constructive applications in cryptography. The last part of the talk will be dedicated to some recent work on the optimality of double-base chains. A double-base chain is useful for performing a scalar multiplication. Given a point P on an elliptic curve and an integer n, the result of this operation is another point on the curve, denoted by [n]P. This operation is crucial in elliptic curve cryptography. With a double-base chain, it is possible to compute [n]P by representing the integer n using two bases: 2 and 3. We are particularly interested in algorithms producing expansions of n with a minimal number of terms. Indeed, this representation of n leads to a scalar multiplication that is much faster than its counterpart relying on the binary representation of n.

The algorithm relies on a generalisation of some previous work of Paul Erdős and John Loxton.
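
To illustrate the underlying idea, here is a greedy sketch of a plain double-base expansion, repeatedly subtracting the largest number of the form 2^a * 3^b. This is illustrative only; it does not produce the provably minimal chains that are the subject of the talk.

    # Greedy double-base expansion: write n as a sum of terms 2^a * 3^b.
    def largest_23_term(n):
        """Largest 2^a * 3^b that is <= n."""
        best, p3 = 1, 1
        while p3 <= n:
            t = p3
            while 2 * t <= n:
                t *= 2
            best = max(best, t)
            p3 *= 3
        return best

    def double_base_expansion(n):
        terms = []
        while n > 0:
            t = largest_23_term(n)
            terms.append(t)
            n -= t
        return terms

    n = 314159
    terms = double_base_expansion(n)
    assert sum(terms) == n
    print(terms)
    # [294912, 18432, 768, 36, 9, 2]: six terms, whereas the binary
    # expansion of n has eleven nonzero digits.

Fewer terms mean fewer point additions during scalar multiplication, which is why minimal-term expansions are worth seeking.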



Bridging persistent data and process data

Friday 7 March, 3pm, Jian Yang

Business processes (BPs) can be designed using a variety of modelling languages and executed in different workflow systems. Current modelling languages cannot capture sufficient semantics of BPs for runtime monitoring, analysis, and management; for example, some important information concerning execution states and/or data is managed in an ad hoc manner and often embedded deeply in execution engines. In most BPM applications, the semantics of BPs needed for runtime management is scattered across BP models, execution engines, and auxiliary stores of workflow systems. The inability to capture such semantics in BP models is the root cause of many BPM challenges. In this talk, we will introduce a new artefact-centric approach to BP modelling that aims to address these issues.

We address one particular problem, namely an important omission in the modelling of data and access for a business workflow: the relationship between the workflow data and the persistent data in the underlying enterprise database(s). Two new notions are formulated. (1) Updatability allows each update on a business entity (or on the database) to be translated into updates on the database (or, respectively, on the business entity), a fundamental requirement for workflow implementation. (2) Isolation reflects that updates by one workflow execution do not alter data used by another running workflow; this property provides an important guide for workflow design. Decision algorithms for updatability and isolation are also presented.
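
As a toy illustration of the updatability notion (my own sketch, not the formalism from the talk), think of a business entity as a view over base tables, with each entity-level update translated back to base-table updates:

    # Toy illustration of updatability: a business entity is a view over base
    # tables, and each entity update must translate unambiguously to them.
    db = {
        "orders":    {1: {"customer_id": 10, "total": 250.0}},
        "customers": {10: {"name": "Acme", "address": "1 Main St"}},
    }

    def order_entity(order_id):
        """Materialise the business-entity view for one order."""
        o = db["orders"][order_id]
        c = db["customers"][o["customer_id"]]
        return {"order_id": order_id, "total": o["total"],
                "customer_name": c["name"], "address": c["address"]}

    def update_entity(order_id, field, value):
        """Translate an entity-level update back to the base tables."""
        o = db["orders"][order_id]
        if field == "total":
            o["total"] = value
        elif field in ("customer_name", "address"):
            col = "name" if field == "customer_name" else "address"
            db["customers"][o["customer_id"]][col] = value
        else:
            raise KeyError(f"{field!r} is not updatable through this view")

    update_entity(1, "address", "2 High St")
    print(order_entity(1))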
