Canberra Branch Meeting -- Statistically Significant Or Signifying Statistics?

  • 16 Jun 2023 9:13 PM
    Message # 13216052
    Francis Hui (Administrator)

    SSA Canberra invites you to its June 2023 branch meeting, which will feature Carolyn Huston from CSIRO presenting online on the topic of "Statistically Significant or Signifying Statistics?"

    Time: 27th June, commencing at 5:45pm and finishing by 7:00pm Canberra time.

    Venue: Zoom. Please see the bottom of this event page for Zoom links. 

    Dinner: After the talk we will be holding a dinner at 7.15pm at The Kathmandu Momo House Nepalese and Indian Cuisines, 24 West Row, Canberra.

    If you are interested in attending the dinner, please let us know by 6pm Monday 26 June by entering your details on the SSA Canberra Branch dinner attendance sheet or contacting Warren Muller. Please regard this as a firm commitment, not just an intention. For withdrawals after the deadline, please remove your name from the sheet and phone or text Warren.

    NOTE: We are offering discounts to SSA early career and student members who attend dinner! For this meeting, dinners will be a fixed charge of $10 for student members and $20 for early career members. 

    Talk details

    Speaker: Dr. Carolyn Huston, CSIRO Data61, Melbourne

    Topic: Statistically Significant or Signifying Statistics?


    There is a gap between explainable statistical (and machine learning) models and what typical consumers and citizens need in order to feel comfortable and confident applying such models, or allowing them to be integrated into the systems of everyday life. Everyday examples include the confident and safe use of generative models such as ChatGPT, or even the sharing of Smart Meter data with electricity companies.

    Statistics as a discipline has often developed in close relationship to real-world problems, solving them and increasing our understanding of the underlying processes driving the data. Think of Fisher's relationship to agricultural field experiments, or Gosset and the t-test evolving to support the brewing of better beer. In addition to improving outcomes and enabling prediction, many statistical methods allow inference into the science or other phenomenon being studied. Conversely, machine learning, while sometimes drawing on statistical methods, has in its development focused more on predictive accuracy than on explanation. As data-analytic methods are applied to complex problems like net zero, there has been increasing demand for explanations of why ML models work, and for ML methods that support better understanding of the phenomenon being predicted. Statistics, with its strong inference component, has a long history in this type of application, and increasingly so does the sub-discipline of explainable machine learning (similar in spirit to aspects of statistical practice, but often taking different logical and philosophical approaches).

    Both statistics and ML methods form the basis of many modern artificial intelligence (AI) systems. These systems are increasingly permeating our everyday life and work, and are increasingly being scrutinised and investigated to try to ensure that their social licence is legitimate, and to understand how to regulate them effectively. Consider the relationships between statistics/explainable ML and ideas in Australia's Artificial Intelligence Framework, including fairness, privacy protection, transparency and explainability, contestability of outcomes, and similar. It would seem that statistical models and/or explainable machine learning approaches are the solution for implementing ethical AI. But are they? In a recent (overheard) conversation between social scientists and data scientists, the response to the data scientist offering to develop an explainable model was "Not one more explainable model, that's not what we need!" Perhaps certain aspects of explainable models are best explained and understood by modelling practitioners, but are not practically explainable to the everyday public. Is full explainability even what is needed to build trust, or to have systems that align with ethical implementation of complex models in AI? There is a knowledge and solution gap in this space.

    As a partial and proposed solution to this quandary, I want to introduce the audience to ideas from design thinking, such as affordances, signifiers, and human-centred approaches to thinking about how models can be, are, and should be used and communicated to end-users and society at large. Hopefully these ideas will stimulate some interest and robust discussion with the audience on paths forward!

    Biography: Please see 


    Topic: SSA Canberra branch meeting
    Time: Jun 27, 2023 05:45 PM Canberra, Melbourne, Sydney

    Join Zoom Meeting

    Meeting ID: 897 6294 2082
    Password: 916526
    One tap mobile
    +61861193900,,89762942082#,,,,0#,,916526# Australia
    +61871501149,,89762942082#,,,,0#,,916526# Australia

    Dial by your location
            +61 8 6119 3900 Australia
            +61 8 7150 1149 Australia
            +61 2 8015 6011 Australia
            +61 3 7018 2005 Australia
            +61 7 3185 3730 Australia
    Meeting ID: 897 6294 2082
    Password: 916526
    Find your local number:

    Or an H.323/SIP room system:
        Dial: +61262227588 (AUCX)
        Meeting ID: 89762942082
        H323/SIP Password: 916526

    Join by Skype for Business

