

Panel 1: “Agriculture Empowered by Supercomputing”

Monday 13 February Day 1

  • Agriculture: from the least digitised industry to next-gen computing applications.
  • What for? How?
  • Open source software and hardware.
  • Reliable and fault tolerant peer-to-peer platforms.
  • How many ecosystems?


In the USA, farm output contributes about 0.7% of GDP and the full contribution from agriculture, food, and related industries is about 5.4% of GDP.  For purposes of comparison, the IT industry contributes more than 10% of GDP. 

Might we thus find ourselves with digital twins, data centres everywhere, and AI always, but not enough food?

Let’s reverse this: if the IT industry is so “powerful” then couldn’t it be the “salvation” for the agriculture and related sectors?

What about if that 5.4% of GDP is a consequence of massive inefficiencies through the whole value chain and supply chain (35%-50% of food harvested/produced never reaches the consumer, unbalanced prices and costs between farm gate and consumer, etc.)?

Would that be a hypothesis of interest?

Instead of increasing production, what about optimisation, efficiency, input reduction, etc.? There are consequences for the climate, too.

And where are we, this community, going to help?

Panel 2: “Modelling, Simulations and Digital Twins”

Tuesday 14 February Day 2

  • Digital Distributed Manufacturing.
  • Open Platforms vs {meta, omni,…} verses. AR/VR.
  • Supply chain challenges, scalability, interdependency.
  • Business and industry challenges.
  • Software and Systems for the Enterprise in a Complex World.

Some quotes:

According to eminent statistician George Box, “All models are wrong, but some are useful”.

“The only difference between theory and practice is that in theory they are the same.” 

Perhaps the same is true of simulation and reality.

Since the 1960s, the potential of software modelling – especially system dynamics modelling and simulation, and ‘digital twins’ – has attracted attention.  Arguably, they could be viewed as specious solutions searching for soluble problems.

The real world is filled with wicked problems – and super-wicked problems.

What are the constraints on algorithmic and non-algorithmic computer-based modelling and simulation of real world challenges and opportunities?

Panel 3: “Exascale to the Edge”

Wednesday 15 February Day 3

  • Distributed Heterogeneous Computing.
  • Small – Cheap – Fast – Secure – Scalable.
  • Trusted computing.
  • Network challenges.
  • Where’s my data? When every device becomes a Data-Centre.

Thanks to advances in processing power and falling energy and hardware costs, it can now be cheaper to process data near its source than to transport it to a central processing facility.
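This trade-off can be framed as a toy cost model: central processing pays transport on all the raw data, while edge processing pays a higher per-unit compute cost but ships only a reduced result. All numbers and function names below are illustrative assumptions, not real benchmarks:

```python
# Toy cost model for the edge-vs-central trade-off.
# All per-GB prices and the reduction factor are illustrative assumptions.

def central_cost(data_gb, transport_per_gb=0.08, compute_per_gb=0.01):
    """Ship all raw data to a central facility, then process it there."""
    return data_gb * (transport_per_gb + compute_per_gb)

def edge_cost(data_gb, reduction=0.05, transport_per_gb=0.08, compute_per_gb=0.03):
    """Process near the source (costlier per GB), transport only reduced results."""
    return data_gb * compute_per_gb + data_gb * reduction * transport_per_gb

for gb in (10, 1_000, 100_000):
    print(f"{gb:>7} GB  central={central_cost(gb):>10.2f}  edge={edge_cost(gb):>10.2f}")
```

Under these assumed prices the edge option wins once the at-source data reduction is large; shifting the transport or compute prices shifts the crossover, which is exactly the trajectory question posed here.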

Meanwhile, network management paradigms continue to evolve towards automated, relatively light human touch (and therefore lower cost and more reliable) architectures.

What are the likely future trajectories of the trade-offs between distributed data processing with lower transport requirements and costs; and centralised data processing with relatively higher transport requirements and costs?

How might these potential scenarios impact New Zealand as a relatively remote location, yet with an abundance of renewable energy sources?

What can we do to get exascale results with petascale cost and environmental impact?

        a.      “Back to the 1970s” focus on software efficiency?

        b.      Algorithms research?

        c.      Continued hardware improvements?  Efficiency?
                Special-purpose hardware accelerators?

“Exascale to the edge” might suggest pushing the computational burden to smartphones and similar devices.  If so…

        a.      Will users accept corresponding reductions in battery life?

        b.      How to motivate deployment of needed special-purpose
                hardware accelerators to the client devices?

        c.      How to ensure that data-locality laws are also respected?
                (If one leaves a country carrying a smartphone containing data that must remain in that country, is that a
                violation, and if so, how is it to be enforced?  And how does one know that one’s smartphone contains
                such data?)

        d.      To what extent are the results of client-device computations trusted, given that such devices might well
                have been compromised?  (People really did attempt to spoof SETI@Home!)

What is the overall trade-off between longer-lived hardware on the one hand and more-rapid deployment of improved hardware on the other?

What are the trade-offs between the software efficiencies that can be more easily attained across a uniform compute base and the advantages of exploiting a larger group of non-uniform systems?

What is the failure model for computations spanning multiple client devices?  For example, SETI@Home used redundant computations.
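Redundant computation in the spirit of SETI@Home can be sketched minimally: dispatch the same work unit to several untrusted clients and accept a result only when a quorum of replicas agree. The function name and quorum rule below are illustrative assumptions, not the actual BOINC/SETI@Home validator:

```python
# Minimal sketch of replication-based result validation: accept a work
# unit's result only when enough independent replicas agree, so a single
# compromised or spoofing client cannot inject a bad answer.

from collections import Counter

def accept_result(replica_results, quorum=2):
    """Return the agreed result, or None if no value reaches the quorum."""
    if not replica_results:
        return None
    value, count = Counter(replica_results).most_common(1)[0]
    return value if count >= quorum else None

# One client spoofs its answer; the honest majority still carries the vote.
print(accept_result([42, 42, 13]))   # quorum reached on 42
print(accept_result([42, 13, 7]))    # no quorum: the work unit must be redone
```

The failure model question then becomes: how much redundancy is enough, and what does each extra replica cost in the edge-device setting above?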

Panel 4: “AI, of course!”

Thursday 16 February Day 4

  • Challenges in compute power, networks, HW, SW.
  • Generative AI (should I ask ChatGPT to write my abstract?)
  • Huge Graph Neural Networks.
  • Data sovereignty vs Colonisation.

Is AI really useful for you? How? Orders of magnitude?

Who are you depending on? Vendor? Legacy? How much?

What do you need/want next from AI?

What are you worried about?

Ethics? Cost? Inaccuracies? Inconsistencies? Black boxes? Black Swans?
