Developing Interpretable Ubiquitous AI and Apps
by leveraging Machine Learning and Human-Computer Interaction.
ABOUT US
The NUS Ubicomp Lab researches and develops Explainable AI-driven analytics and apps to improve people’s lives. We are interested in combining Machine Learning and Human-Computer Interaction to improve health, wellness and livability in smart cities with interpretable predictive analytics and mobile apps for automated self-tracking.
AI + HCI
User-Centric Explainable AI
We investigate user requirements and explanation algorithms to help users understand and trust increasingly ubiquitous Artificial Intelligence.
Apps for Health Behavior Change
We investigate socio-technical, context-aware applications that sense behavior and promote pro-health and sustainable behaviors through ubiquitous mobile apps and wearables.
User Behavior Analytics and Visualizations
We apply data mining and visualization techniques to understand user, population, and urban behaviors.
Health
Wellness
Cities
Some of our works
HIGHLIGHTED PROJECTS
HIRING
We are actively looking for highly motivated and talented postdoctoral research fellows and PhD, Master's, and undergraduate students to work in the areas of explainable AI, applications of deep learning, human-computer interaction, ubiquitous/pervasive computing, Internet of Things and sensors, data analytics, and data visualization.
If you are a prospective PhD student, please check out details of the NUS Computer Science PhD programme and apply online. If you are interested in working with our lab, please email your CV and transcript!
Interactive Explainable AI – Postdoc, PhD Student
The prevalence and ubiquity of deep learning and AI in society is driving the need for their responsible use. To make AI more trustworthy, it needs to be explainable, privacy-preserving, and human-centered. While much recent research on Explainable AI (XAI) has produced many explanation techniques, these often remain unusable for end users and domain experts. Therefore, this project aims to develop novel explainable AI algorithms and evaluation methods to improve the usability and usefulness of AI.
We are looking for talented candidates to join our multidisciplinary team. The project investigates computer vision, artificial intelligence, data visualization, and human-computer interaction to develop effective human-AI collaboration and explainable AI.
Expected Skills:
- For PhD candidates: Master's or Bachelor's degree in Computer Science, Electrical Engineering, or related disciplines
- For Postdoc candidates: PhD in Computer Science, Electrical Engineering or related disciplines with a background in human-computer interaction and cyber-physical systems
- Expertise in computer vision, machine learning, human-computer interaction, and/or data visualization is highly desirable
- Competency in developing and implementing algorithms, and programming
- Excellent writing and presentation skills
- Ability to work independently (50%) and in team projects (50%)
To apply, please send your research statement, CV and names of 3 referees (name, institution, email) to Prof. Brian LIM (brianlim@nus.edu.sg). Only shortlisted candidates will be contacted.
More job descriptions.
OUR TEAM

Mario MICHELESSA
PhD Student

Louth Bin RAWSHAN
PhD Student

Gucheng WANG
PhD Student, co-advised with A/Prof Terence SIM

CHEN Yihe
Research Engineer

LIN Geyu
Masters

YU Zhecheng
Undergraduate

Jolyn LOH
Undergraduate

James TAN
Undergraduate

AHN Yehoon
Undergraduate
Our Alumni
We have advised students from a wide range of disciplines (computer science, electrical engineering, design) and
across many education levels (high school, undergraduate, masters, PhD students). See our alumni.
LATEST NEWS
Paper on Relatable Explainable AI published in ACM CHI 2022 and received Best Paper Award
Explainable AI is important to build trustworthy and understandable systems. To aid understanding, explanations need to be relatable, but current techniques remain overly technical with obscure information. Drawing from the perceptual processing theory of human...
Paper on Debiasing misleading explanations of obfuscated or corrupted images published in ACM CHI 2022
Just as AI performance degrades with data corruptions, we found that so does explanation faithfulness. For example, blurring an image can provide privacy, but this causes heatmap explanations to become spurious and highlight the wrong objects for the prediction. We...
Paper on Increasing Diversity in Crowd Ideation with Explainable AI published in ACM CHI 2022
Previously, we improved creativity by automatically directing the crowd with diverse prompts Directed Diversity (Cox et al., CHI'21). Here, we further improve creativity by providing real-time feedback on the ideations. We propose Interpretable Directed Diversity to...
Paper on the Privacy Risk of AI Explanations published at ICCV 2021
The successful deployment of artificial intelligence (AI) in many domains from healthcare to hiring requires their responsible use, particularly in model explanations and privacy. Explainable artificial intelligence (XAI) provides more information to help users to...
Paper on Showing or Suppressing Uncertainty in Model Explanation published in Artificial Intelligence and presented at IJCAI 2021
Feature attribution is widely used to explain how influential each measured input feature value is for an output inference. However, measurements can be uncertain, and it is unclear how the awareness of input uncertainty can affect the trust in explanations. We...
Paper on Increasing Diversity in Crowd Ideation with Language Model Embedding published in ACM CHI 2021
Crowdsourcing can obtain many creative ideas, but without careful coordination, many ideators may generate the same ideas. This leads to redundancy and limits the diversity of ideas from the crowd. To mitigate this redundancy, we propose Directed Diversity (DD), that...
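The core idea of selecting mutually distant prompts in a language-embedding space can be sketched with greedy farthest-point sampling. This is a simplified, hypothetical illustration (using tiny hand-made 2-D vectors rather than real language-model embeddings), not the actual Directed Diversity pipeline described in the paper:

```python
# Simplified sketch: pick prompts whose embeddings are mutually far apart
# via greedy farthest-point sampling. Real language-model embeddings are
# high-dimensional; the 2-D vectors here only illustrate the idea.
import math

def farthest_point_sample(embeddings, k):
    """Greedily pick k indices that are mutually far apart."""
    chosen = [0]  # start from the first point (arbitrary seed)
    while len(chosen) < k:
        # For each remaining candidate, its distance to the nearest chosen point.
        def min_dist(i):
            return min(math.dist(embeddings[i], embeddings[j]) for j in chosen)
        # Pick the candidate farthest from everything already chosen.
        best = max((i for i in range(len(embeddings)) if i not in chosen),
                   key=min_dist)
        chosen.append(best)
    return chosen

prompts = ["healthy snacks", "office exercise", "desk stretches",
           "standing desks", "walking meetings"]
vectors = [(0.0, 0.0), (1.0, 0.0), (1.1, 0.1), (0.9, 0.2), (0.5, 1.0)]
picked = farthest_point_sample(vectors, k=3)
print([prompts[i] for i in picked])
```

Near-duplicate ideas ("office exercise", "desk stretches") sit close together in the embedding space, so the greedy selection skips the redundant one.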
Paper on Interpretable Sorting of Multiple Attributes published at TVCG
Consider searching for a cheap hotel at a good location on a travel website. You can sort hotels by price, but the locations can either be near or far from your desired location. Conversely, you can sort by distance, but the prices will vary wildly. Why can't we have...
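The trade-off above can be sketched with a simple weighted blend of normalized attributes. This is only a hypothetical illustration of sorting by two attributes at once; the paper's actual Imma Sort algorithm is more sophisticated:

```python
# Hypothetical sketch: rank items by a weighted sum of min-max normalized
# attributes, so two competing criteria (price, distance) are blended.

def blended_sort(items, keys, weights):
    """Sort items ascending by a weighted sum of normalized attributes."""
    def normalize(values):
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0  # avoid division by zero for constant columns
        return [(v - lo) / span for v in values]

    columns = [normalize([item[k] for item in items]) for k in keys]
    scores = [sum(w * col[i] for w, col in zip(weights, columns))
              for i in range(len(items))]
    return [item for _, item in sorted(zip(scores, items), key=lambda p: p[0])]

hotels = [
    {"name": "A", "price": 80,  "distance_km": 5.0},
    {"name": "B", "price": 200, "distance_km": 0.5},
    {"name": "C", "price": 120, "distance_km": 1.0},
]
# Weight price and distance equally; a lower blended score ranks first.
ranked = blended_sort(hotels, keys=("price", "distance_km"), weights=(0.5, 0.5))
print([h["name"] for h in ranked])
```

With equal weights, the moderately priced, moderately close hotel C ranks first, while the extremes A (cheap but far) and B (close but expensive) tie behind it.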
Paper on Analyzing Population Step Count published at PACM IMWUT Vol. 3 and won Distinguished Paper Award
Our paper on the city-scale data analysis of population step count behavior has won the Distinguished Paper Award in PACM IMWUT Vol. 3, among 6 awarded from 166 total papers published. With this, we hope more researchers can learn how to deeply analyze surprisingly...
Paper on modeling cognitive load in explainable AI published in ACM CHI 2020
Explanations of artificial intelligence can be simplified by controlling the number of variables, but the complexity of how they are visualized can still impede quick interpretation. We quantitatively modeled the cognitive load of machine learning model explanations...
Welcome new Postdoc Yunlong WANG
Let us welcome Dr. Yunlong Wang as a post-doctoral research fellow to our lab! Yunlong obtained his PhD in Computer Science from the HCI group in the University of Konstanz (Germany). During his PhD, he focused on designing digital health interventions for sedentary...
Paper on making OD Bundling Visualizations less misleading accepted to IEEE TVCG
OD Bundling is a popular technique to visualize key patterns in movement flows, but the curves illustrated can be misleading by suggesting paths that do not match true or plausible trajectories. We present OD Morphing, which allows traffic and urban planners to...
XAI Framework of Reasoned Explanations – CHI 2019 Presentation and Interactive Tutorial
We will be presenting our work on the conceptual XAI Framework of Reasoned Explanations at CHI 2019 next week on Tuesday! Come to our talk if you are in Glasgow! Updated: Danding gave a great presentation at CHI on Tuesday morning. Watch it online here!...
Welcome new PhD student ZHANG Wencan
Let us welcome Zhang Wencan as a new PhD student to our lab! Wencan received M.S. and B.S. degrees from the EE department at Shanghai Jiao Tong University. His research interests include context-aware sensing and activity recognition. He enjoys sports (badminton),...
Paper on Human-centric XAI Reasoning Framework has been accepted at CHI 2019
Which explanations should AI provide? We identified pathways to tailor explanation techniques based on theories of human reasoning processes. Our paper on a framework for human-centric explanations for XAI has been accepted to CHI 2019! Congratulations to team members...
IUI 2019 Second Workshop on Explainable Smart Systems (ExSS)
We are co-organizing the second workshop on Explainable Smart Systems at IUI 2019. If you are interested in interacting with HCI, design, and AI researchers to enhance the understandability and trustworthiness of AI, please join us! More information at...
PUBLICATIONS
2025
- A Nuthalapati, N Hinds, BY Lim, Q Wang. 2025. Enhancing XAI Interpretation through a Reverse Mapping from Insights to Visualizations. VIS 2025.
- M Michelessa, J Ng, C Hurter, BY Lim. 2025. Varif.ai to Vary and Verify User-Driven Diversity in Scalable Image Generation. In Proceedings of the 2025 ACM Designing Interactive Systems Conference, 1867-1885
- Brian Y. Lim, Joseph P. Cahaly, Chester Y. F. Sng, Adam Chew. 2025. Diagrammatization and Abduction to Improve AI Interpretability with Domain-Aligned Explanations for Medical Diagnosis. In Proceedings of the international Conference on Human Factors in Computing Systems (CHI ’25).
- Harshavardhan Abichandani, Wencan Zhang, and Brian Y. Lim. 2025. Robust Relatable Explanations of Machine Learning with Disentangled Cue-specific Saliency. In Proceedings of the 30th International Conference on Intelligent User Interfaces (IUI 2025).
2024
- Ayrton San Joaquin, Bin Wang, Zhengyuan Liu, Nicholas Asher, Brian Y. Lim, Philippe Muller, and Nancy F. Chen. 2024. In2Core: Leveraging Influence Functions for Coreset Selection in Instruction Finetuning of Large Language Models. In Findings of the Association for Computational Linguistics (EMNLP 2024).
- Qihao Liang, Xichu Ma, Finale Doshi-Velez, Brian Lim, and Ye Wang. 2024. XAI-Lyricist: Improving the singability of AI-Generated lyrics with prosody explanations. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24. (pp. 7877-7885). IJCAI, Human-Centred AI.
- Yu Liu, Noah R. Sundah, Nicholas R. Y. Ho, Wan Xiang Shen, Yun Xu, Auginia Natalia, Zhonglang Yu, Ju Ee Seet, Ching Wan Chan, Tze Ping Loh, Brian Y. Lim, and Huilin Shao. 2024. Bidirectional linkage of DNA barcodes for the multiplexed mapping of higher-order protein interactions in cells. Nature Biomedical Engineering 8, 909–923 (2024).
- Eura Nofshin, Esther Brown, Weiwei Pan, Brian Y. Lim, Finale Doshi-Velez. 2024. A Sim2Real Approach for Identifying Task-Relevant Properties in Interpretable Machine Learning. ICML 2024 Workshop on the Next Generation of AI Safety.
- Jessica Y. Bo, Pan Hao, and Brian Y. Lim. 2024. Incremental XAI: Memorable Understanding of AI with Incremental Explanations. In Proceedings of the international Conference on Human Factors in Computing Systems. CHI ’24.
2023
- Kary Främling, Brian Y. Lim, & Katharina J. Rohlfing. 2023. Social Explainable AI: Designing multimodal and interactive communication to tailor human–AI collaborations. NII Shonan Meeting Report, No. 200.
- Yunlong Wang, Shuyuan Shen, and Brian Y. Lim. 2023. RePrompt: Automatic Prompt Editing to Refine AI-Generative Art Towards Precise Expressions. In Proceedings of the international Conference on Human Factors in Computing Systems. CHI ’23.
- Mario Michelessa, Christophe Hurter, Brian Y. Lim, Jamie Ng Suat Ling, Bogdan Cautis, and Carol Anne Hargreaves. 2023. Visual Explanations of Differentiable Greedy Model Predictions on the Influence Maximization Problem. In Big Data and Cognitive Computing 7, no. 3 (2023): 149.
- Hitoshi Matsuyama, Nobuo Kawaguchi, and Brian Y. Lim. 2023. IRIS: Interpretable Rubric-Informed Segmentation for Action Quality Assessment. In Proceedings of the 28th International Conference on Intelligent User Interfaces (IUI 2023).
- Yan Lyu, Hangxin Lu, Min Kyung Lee, Gerhard Schmitt, and Brian Y. Lim. 2023. IF-City: Intelligible Fair City Planning to Measure, Explain and Mitigate Inequality. IEEE Transactions on Visualization and Computer Graphics (TVCG).
2022
- Wencan Zhang and Brian Y. Lim. 2022. Towards Relatable Explainable AI with the Perceptual Process. In Proceedings of the international Conference on Human Factors in Computing Systems. CHI ’22. Best Paper Award (Top 1%).
- Wencan Zhang, Mariella Dimiccoli and Brian Y. Lim. 2022. Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning. In Proceedings of the international Conference on Human Factors in Computing Systems. CHI ’22.
- Yunlong Wang, Priyadarshini Venkatesh, Brian Y. Lim. 2022. Interpretable Directed Diversity: Leveraging Model Explanations for Iterative Crowd Ideation. In Proceedings of the international Conference on Human Factors in Computing Systems. CHI ’22.
2021
- Yunlong Wang, Jiaying Liu, Homin Park, Jordan Schultz-McArdle, Stephanie Rosenthal, Brian Y. Lim. 2021. SalienTrack: providing salient information for semi-automated self-tracking feedback with model explanations. arxiv.org/abs/2109.10231.
- Xuejun Zhao, Wencan Zhang, Xiaokui Xiao, and Brian Y. Lim. 2021. Exploiting Explanations for Model Inversion Attacks. 2021 IEEE International Conference on Computer Vision (ICCV).
- Samuel R. Cox, Yunlong Wang, Ashraf Abdul, Christian von der Werth, Brian Y. Lim. 2021. Directed Diversity: Leveraging Language Embedding Distances for Collective Creativity in Crowd Ideation. In Proceedings of the international Conference on Human Factors in Computing Systems. CHI ’21.
- Danding Wang, Wencan Zhang, Brian Y. Lim. 2021. Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations. Artificial Intelligence.
2020
- Yan Lyu, Fan Gao, I-Shuen Wu, and Brian Y. Lim. 2020. Imma Sort by two or more attributes with Interpretable Monotonic Multi-Attribute Sorting. IEEE Transactions on Visualization and Computer Graphics (TVCG).
- Guang Jiang, Mengzhen Shi, Pengcheng An, Ying Su, Yunlong Wang, and Brian Y. Lim. 2020. NaMemo: Enhancing Lecturers’ Interpersonal Competence of Remembering Students’ Names. In Companion Publication of the 2020 ACM on Designing Interactive Systems Conference. DIS Workshop ’20.
- Leye Wang, Daqing Zhang, Dingqi Yang, Brian Y. Lim, Xiao Han, Xiaojuan Ma. 2020. Sparse Mobile Crowdsensing With Differential and Distortion Location Privacy. In IEEE Transactions on Information Forensics and Security.
- Ashraf Abdul, Christian von der Weth, Mohan Kankanhalli, and Brian Y. Lim. 2020. COGAM: Measuring and Moderating Cognitive Load in Machine Learning Model Explanations. In Proceedings of the international Conference on Human Factors in Computing Systems. CHI ’20.
2019
- Yan Lyu, Victor CS Lee, Joseph Kee-Yin Ng, Brian Y. Lim, Kai Liu, Chao Chen. 2019. Flexi-Sharing: A Flexible and Personalized Taxi-Sharing System. In IEEE Transactions on Vehicular Technology.
- Heidi Fuchs, Arman Shehabi, Mohan Ganeshalingam, Louis-Benoit Desroches, Brian Lim, Kurt Roth, Allen Tsao. 2019. Comparing datasets of volume servers to illuminate their energy use in data centers. Energy Efficiency, 1-14.
- Yan Lyu, Xu Liu, Hanyi Chen, Arpan Mangal, Kai Liu, Chao Chen, and Brian Y. Lim. 2019. OD Morphing: balancing simplicity with faithfulness for OD bundling. In IEEE Transactions on Visualization and Computer Graphics (TVCG).
- Brian Y. Lim, Judy Kay, and Weilong Liu. 2019. How does a nation walk? Interpreting large-scale step count activity with weekly streak patterns. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT). (IMWUT Vol. 3 Distinguished Paper Award (Top 6 of 166)).
- Jo Vermeulen, Brian Y. Lim, Mirzel Avdic, Danding Wang, and Ashraf Abdul. 2019. The Curious Case of Providing Intelligibility for Smart Speakers. In CHI 2019 Workshop on Where is the Human? Bridging the Gap Between AI and HCI.
- Homin Park, Homanga Bharadhwaj, and Brian Y. Lim. 2019. Hierarchical Multi-Task Learning for Healthy Drink Classification. In Proceedings of the International Joint Conference on Neural Networks (IJCNN).
- Brian Y. Lim, Qian Yang, Ashraf Abdul, and Danding Wang. 2019. Why these Explanations? Selecting Intelligibility Types for Explanation Goals. In IUI 2019 Second Workshop on Explainable Smart Systems (ExSS 2019).
- Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y. Lim. 2019. Designing Theory-Driven User-Centric Explainable AI. In Proceedings of the international Conference on Human Factors in Computing Systems. CHI ’19.
- Zhutian Yang, Eng Hooi Tan, Yingda Li, Brian Y. Lim, Michael Patrick Metz, and Tze Ping Loh. 2019. Relative criticalness of common laboratory tests for critical value reporting. In Journal of Clinical Pathology.
2018
- Jiangtao Wang, Feng Wang, Yasha Wang, Daqing Zhang, Brian Y. Lim, and Leye Wang. 2018. Allocating Heterogeneous Tasks in Participatory Sensing with Diverse Participant-Side Factors. In IEEE Transactions on Mobile Computing (TMC).
- Kai Lukoff, Taoxi Li, Yuan Zhuang, and Brian Y. Lim. 2018. TableChat: Mobile Food Journaling to Facilitate Family Support for Healthy Eating. In Proceedings of the ACM Conference on Computer Supported Cooperative Work. CSCW ’18.
- Homanga Bharadhwaj, Homin Park, Brian Y. Lim. 2018. RecGAN: Recurrent Generative Adversarial Networks for Recommendation Systems. In Proceedings of the ACM Conference on Recommender Systems. RecSys ’18.
- Heidi Fuchs, Arman Shehabi, Mohan Ganeshalingam, Louis-Benoit Desroches, Brian Y. Lim, Kurt Roth, and Allen Tsao. 2018. Characteristics and Energy Use of Volume Servers in the U.S. In ACEEE Summer Study 2018.
- Ashraf Abdul, Jo Vermeulen, Danding Wang, Brian Y. Lim (corresponding author), Mohan Kankanhalli. 2018. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda. In Proceedings of the international Conference on Human Factors in Computing Systems. CHI ’18.
- Brian Y. Lim, Danding Wang, Tze Ping Loh, and Kee Yuan Ngiam. 2018. Interpreting Intelligibility under Uncertain Data Imputation. In ACM IUI 2018 Workshop on Explainable Smart Systems (ExSS 2018).
- Brian Y. Lim, Alison Smith, Simone Stumpf. 2018. ExSS 2018: Workshop on Explainable Smart Systems. In Workshop at ACM IUI 2018.
- Homin Park, Zhenkai Wang, Abhinav Ramesh Kashyap, Brian Y. Lim. 2018. Biases in Food Photo Taking Behavior. ACM CHI 2018 Workshop on Designing Recipes for Digital Food Lifestyles.
2017
- Lim, B. Y., Chng, X., Zhao, S. 2017. Trade-off between Automation and Accuracy in Mobile Photo Recognition Food Logging. In Proceedings of the Fifth International Symposium on Chinese CHI.
- Lim, B. Y., Ayalon, O., Toch, E. 2017. Reducing Communication Uncertainty with Social Intelligibility: Challenges and Opportunities. ACM CHI 2017 Workshop on Designing for Uncertainty in HCI.
- Fuchs, H., Shehabi, A., Ganeshalingam, M., Desroches, L. B., Lim, B., Roth, K., Tsao, A. 2017. Characteristics and Energy Use of Volume Servers in the United States. Technical Report by the Lawrence Berkeley National Laboratory (LBNL).
2016
- Leye Wang, Daqing Zhang, Dingqi Yang, Brian Y. Lim, and Xiaojuan Ma. 2016. Differential Location Privacy for Sparse Mobile Crowdsensing. In IEEE International Conference on Data Mining 2016.
2015
- Urban, B., Shmakova, V., Lim, B. Y., Roth, K. 2015. Residential Consumer Electronics Energy Consumption in the United States. In Energy Efficiency in Domestic Appliances and Lighting 2015.
- Urban, B., Shmakova, V., Lim, B. Y., Roth, K. 2015. Energy Consumption of Consumer Electronics in U.S. Homes in 2013. Final Report to the Consumer Electronics Association (CEA) by Fraunhofer USA.
2014
- Lim, B. Y., Roth, K., Nambiar, S., Rayakota, H. 2014. Rapid Prototyping of Energy Management Applications with FRESH. In ACEEE Summer Study 2014.
- Roth, K., Shmakova, V., Urban, B., Lim, B. Y. 2014. Residential Consumer Electronics Energy Consumption in 2013. In ACEEE Summer Study 2014.
2013
- Lim, B. Y., Dey, A. K. 2013. Evaluating Intelligibility Usage and Usefulness in a Context-Aware Application. In Human-Computer Interaction. Towards Intelligent and Implicit Interaction. Springer Berlin Heidelberg, 2013. 92-101.
- Lim, B. Y., Roth, K., Nambiar, S., Rayakota, H. 2013. FRESH: The Fraunhofer Experimental Smart Home Research Platform for Home Energy Management Applications. In MIT Energy Night 2013.
2012
- Lim, B. Y., Dey, A. K. 2012. Weights of Evidence for Intelligible Smart Environments. ACM Ubicomp 2012 Workshop on Adaptable Service Delivery in Smart Environments.
- Lim, B. Y. 2012. Improving understanding and trust with intelligibility in context-aware applications. CMU PhD Dissertation.
- Lim, B. Y., Dey, A. K. 2012. Evaluating Intelligibility Usage and Usefulness in a Context-Aware Application. CMU-HCII Technical Report.
- Lim, B. Y. and Dey, A. K. 2012. Field Evaluation of IM Autostatus, an Intelligible Context-Aware Application. CMU-HCII Technical Report.
2011
- Lim, B. Y., Dey, A. K. 2011. Investigating Intelligibility for Uncertain Context-Aware Applications. In Proceedings of the 13th international conference on Ubiquitous computing (UbiComp ’11). ACM, New York, NY, USA, 415-424. DOI=10.1145/2030112.2030168 .
- Lim, B. Y., Dey, A. K. 2011. Design of an Intelligible Mobile Context-Aware Application. In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services (MobileHCI ’11). ACM, New York, NY, USA, 157-166. DOI=10.1145/2037373.2037399
- Lim, B. Y., Shick, A., Harrison, C., Hudson, S. E. 2011. Pediluma: Motivating Physical Activity Through Contextual Information and Social Influence. In Proceedings of the fifth international conference on Tangible, embedded, and embodied interaction (TEI ’11). ACM, New York, NY, USA, 173-180.
- Vermeulen, J., Lim, B. Y., Kawsar, F. 2011. Pervasive Intelligibility: Workshop on Intelligibility and Control in Pervasive Computing. Pervasive 2011 Workshop on Intelligibility and Control in Pervasive Computing.
2010
- Lim, B. Y., Brdiczka, O., Bellotti, V. 2010. Show Me a Good Time: Using Content to Provide Activity Awareness to Collaborators with ActivitySpotter. In Proceedings of the 16th ACM international conference on Supporting group work (GROUP ’10). ACM, New York, NY, USA, 263-272.
- Lim, B. Y., Dey, A. K. 2010. Toolkit to Support Intelligibility in Context-Aware Applications. In Proceedings of the 12th ACM international Conference on Ubiquitous Computing (Copenhagen, Denmark, September 26 – 29, 2010). Ubicomp ’10. ACM, New York, NY, 13-22.
2009
- Lim, B. Y., Dey, A. K. 2009. Assessing Demand for Intelligibility in Context-Aware Applications. In Proceedings of the 11th international Conference on Ubiquitous Computing (Orlando, Florida, USA, September 30 – October 03, 2009). Ubicomp ’09. ACM, New York, NY, 195-204.
- Lim, B. Y., Dey, A. K., Avrahami, D. 2009. Why and Why Not Explanations Improve the Intelligibility of Context-Aware Intelligent Systems. In Proceedings of the 27th international Conference on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI ’09. ACM, New York, NY, 2119-2128. Best Paper Honourable Mention (Top 5%).
- Harrison, C., Lim, B. Y., Shick, A., Hudson, S. E. 2009. Where to Locate Wearable Displays? Reaction Time Performance of Visual Alerts from Tip to Toe. In Proceedings of the 27th international Conference on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI ’09. ACM, New York, NY, 941-944.
- Diamant, E. I., Lim, B. Y., Echenique, A., Leshed, G., and Fussell, S. R. 2009. Supporting intercultural collaboration with dynamic feedback systems: preliminary evidence from a creative design task. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI EA ’09. ACM, New York, NY, 3997-4002.
2008
- Lim, B. Y., Shick, A., Harrison, C. 2008. Personal-Public Displays: Motivating Behavior Change through Ambient Information and Social Pressure. ACM CHI 2008 Workshop on Ambient Persuasion.
GET IN TOUCH
Contact Us
DEPARTMENT OF COMPUTER SCIENCE
13 Computing Drive, Singapore 117417