Thank you to everyone who attended the Introduction to WildObs webinar on 5 March 2026. Additional questions the team did not have the opportunity to respond to live are answered here. These responses will be incorporated into our general Frequently Asked Questions or other support articles in the future.
TABLE OF CONTENTS
- How is the WildObs platform different from the existing Wildlife Insights platform?
- For those with lots of data already in Wildlife Insights, are you encouraging import to WildObs also?
- Will WildObs support accession of and public access to video snippet data?
- When will the camera deployment metadata app be available?
- When uploading images, could an option exist (if not already) to change filenames to be unique?
- Are there any interactions with ALA DigiVol?
- Can you provide some benchmarks or expectation for processing speed?
- How often are you hoping to push data across to the Atlas of Living Australia database?
- Is there a cost to this platform to the user?
- What levels of sharing data are available at the moment?
- Is it possible to develop a new WildObs model to identify wildlife in other areas, or is the platform limited to Australian species only?
- Vertically oriented cameras
- Can images be imported via FTP from 4G enabled cameras?
- Are you or will you collaborate with eVorta?
- Threatened species obscuring
- Acoustic monitoring
- Occupancy modelling with EcoCommons
How is the WildObs platform different from the existing Wildlife Insights platform?
The Wildlife Insights platform hosts computer vision models such as SpeciesNet and MegaDetector. These models have been trained primarily on global datasets and therefore do not yet have strong representation of Australian wildlife.
A good example is when camera trap images of a cassowary are uploaded to Wildlife Insights — the system may annotate it as an emu because the model has not yet been sufficiently trained on cassowary images.
WildObs is building nation- and site-specific computer vision models designed specifically to detect and classify Australian wildlife, which you can select from our platform. We are also contributing our data to Wildlife Insights to help improve their models. At the same time, WildObs uses national infrastructure, and the data and capabilities are being developed and hosted in Australia with the support of our collaborators.
For those with lots of data already in Wildlife Insights, are you encouraging import to WildObs also?
Absolutely! We have a strong existing pipeline to enable image/data transfers between WildObs and Wildlife Insights. We take a very collaborative approach here.
Will WildObs support accession of and public access to video snippet data?
Oberon Citizen Science Network in the NSW Central Tablelands has been developing and testing custom DIY long-wavelength infrared (thermal) video cameras for detection of platypus, rakali and arboreal mammals. Still images from these cameras are relatively low resolution (640x512 pixels) and are somewhat underwhelming, but video sequences at 25 or 50 frames per second from the cameras are fantastic and allow species identification based on morphology and also characteristic movement and behaviour. Typical video snippets for such observations range from 20 to 50 MB each. Will WildObs support accession of and public access to such video snippet data? Currently we use YouTube as a publicly accessible repository, as well as AWS (Amazon Web Services) S3 buckets, but we would prefer to use a publicly-funded national repository like WildObs. For an example, please see https://youtu.be/sYxzaZNwJPU
Our current system can accommodate video uploads, but currently we do not host any computer vision models tailored for classifying species (or other attributes) from video data. However, we are looking to include additional capacity for handling video data from camera traps in our next development iteration. Your use case with infrared videos (great platypus video BTW!) is particularly unique, and would require training a computer vision model with thermal videos rather than 'regular' daytime or nighttime infrared camera trap imagery. If you have thermal videos that have had their species (or other information) classified and formatted as a Camtrap DP media file (https://camtrap-dp.tdwg.org/data/#media), this is the sort of information we could use as training data to develop such a computer vision model. If there is specific information you are hoping to extract from videos (e.g., behaviours, direction/speed of travel, species classifications) and would like to chat more, we can set up a meeting to learn more about your use case.
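As a rough illustration, a Camtrap DP media file is a CSV table with one row per media file. The sketch below builds a minimal table for a hypothetical thermal video clip; the record values are invented for illustration, and the field names shown should be verified against the Camtrap DP media specification (which defines the authoritative and required fields) before use.

```python
import csv
import io

# Hypothetical media record for a thermal video snippet. Field names follow
# the Camtrap DP media resource; check https://camtrap-dp.tdwg.org/data/#media
# for the authoritative field list and any required fields.
records = [
    {
        "mediaID": "m001",                       # unique media identifier (invented)
        "deploymentID": "dep-ocsn-01",           # deployment the clip belongs to (invented)
        "timestamp": "2026-02-14T21:35:00+10:00",
        "filePath": "https://example.org/videos/clip_001.mp4",  # placeholder URL
        "fileMediatype": "video/mp4",
    },
]

# Write the records out as a CSV media table.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(records[0]))
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())
```

A table like this, paired with classification information for each clip, is the general shape of training data the answer above refers to.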
When will the camera deployment metadata app be available?
This is something we are still looking into. It will be a focus in our next iteration after June 1st.
When uploading images, could an option exist (if not already) to change filenames to be unique?
Data is uploaded to our platform as a deployment, which represents a collection of camera trap image files. Each deployment is assigned a unique deployment ID generated by the platform.
Are there any interactions with ALA DigiVol?
Not at this stage. We have had conversations, but it was a bit out of scope for this phase. We will pick up the conversation again after launch on June 1st.
Can you provide some benchmarks or expectation for processing speed?
E.g. what is the estimated latency for large scale datasets (>100k images)?
We do not yet have this latency data; we will present benchmarks in our upcoming webinar focused more on CV models.
How often are you hoping to push data across to the Atlas of Living Australia database?
Or is it on ALA to pull data from WildObs?
The dataset is first standardised through our processing pipeline. Once that process is completed, the standardised dataset can then be pushed to the ALA.
Is there a cost to this platform to the user?
The platform is free to use.
What levels of sharing data are available at the moment?
All or metadata only? Working with Ranger Groups, some more control on what and where data are shared would be great. e.g. an option to share with other Ranger Groups only, certain species or locations can/cannot be shared etc.
See Data sharing terms for the data sharing options currently available.
In this specific instance, we would make sure that someone from the other ranger group is included as a collaborator on the specific project. It's easy to add collaborators to specific projects and assign them specific roles that can bypass some of the sharing hurdles. For example, the general public will only see metadata, but all collaborators will have full access.
Is it possible to develop a new WildObs model to identify wildlife in other areas, or is the platform limited to Australian species only?
Our current models are specifically trained to identify Australian species. However, if users want to use WildObs for wildlife in other regions, the platform also provides options to use other well-known models such as SpeciesNet and MegaDetector, which can support broader species detection.
Vertically oriented cameras
If we are using vertically oriented cameras (facing downwards), would your models be able to ID animals in these images or would we need to provide training datasets first?
WildObs is building models robust enough to handle camera trap images acquired from different angles. We are building on the qualities of the Google-developed model SpeciesNet, which has been designed to handle such corner cases, including when only part of the animal is visible in the image.
Can images be imported via FTP from 4G enabled cameras?
At the moment, the platform does not support direct FTP imports from 4G enabled cameras. However, we are considering adding more image upload methods in the future that may support something like live-feed cameras. We are very interested to collaborate with researchers who are using live-feed cameras and would like to develop clear pipelines to enable better research from this data type. If you are using live-feed cameras, please reach out and we can help develop a pipeline to suit your needs.
Are you or will you collaborate with eVorta?
We are aware of eVorta and would certainly be open to collaborating with them in the future. Our wildlife image platform is designed to host multiple computer vision models that users will be able to select from. We would be more than happy to host eVorta's computer vision models to ensure as many people as possible have access to shared digital infrastructure to accelerate inferences generated from cameras.
Threatened species obscuring
It was mentioned that sensitive data such as threatened species are always obscured.
Does this mean the location coordinates are reduced in decimal points (so that a point is still provided but with limited resolution/accuracy), i.e. within a 100 km radius of this point? OR
Does it mean the database will tell people that records of species x occur in this database/specific project but not provide any coordinates or mapped points, triggering users/viewers to seek permission? OR
Both or one or the other in different circumstances?
We have implemented a novel approach to obscuring threatened species information in data accessible from the WildObs database. Your second point is closer to reality; we enable taxonomic queries which can return projects that have detected threatened species, but any identifying information about the specific deployment, observation, or media for that record has been obscured. Therefore, researchers can gather information to learn about projects that may have data on their threatened species of interest without publicly exposing when and where that species was detected. If the user wants unobscured access to those records, the user can submit a data sharing request (facilitated via WildObs) to access an unobscured version of the data. WildObs aims to align with the newly developed RASD sensitive species lists (https://www.rasd.org.au/) to ensure seamless alignment with ALA's threatened species reporting policies.
Acoustic monitoring
WildObs is not directly handling acoustic data, but our co-funded partners at Open Ecoacoustics (https://openecoacoustics.org/) are doing a great job and should be able to assist with all acoustics-related queries.
Occupancy modelling with EcoCommons
Does using EcoCommons or the WildObs R package require linking to WildObs database or using exports from it? OR
Can these facilities be used with people’s currently privately held data and input sheets (which would have project specific covariates unlikely to be included in the WildObs database)?
The EcoCommons tutorial (Single-Species Single-Season Occupancy Model - Practical Notebook) uses data collected from the WildObs database via the WildObsR R package. However, the tools in the WildObsR R package and the EcoCommons tutorial can broadly be applied to any dataset. The key distinction is that they are designed to work with the Camtrap DP data standard (https://camtrap-dp.tdwg.org), and may struggle with data using different standards. For someone who is savvy with data and can transform their private data to match the Camtrap DP standard, the tools and workflow described in the tutorial should work very well.
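The transformation step mentioned above is mostly column renaming. The sketch below maps a hypothetical private input sheet onto Camtrap DP-style observation column names while passing project-specific covariates through unchanged; all column names in the map (both the private ones and the Camtrap DP targets) are assumptions and should be checked against your own data and the Camtrap DP specification.

```python
# Hypothetical mapping from a private data sheet's columns to
# Camtrap DP-style observation fields. Verify the target names
# against https://camtrap-dp.tdwg.org before relying on them.
COLUMN_MAP = {
    "camera_id": "deploymentID",
    "detection_time": "eventStart",
    "species": "scientificName",
    "n_animals": "count",
}

def to_camtrap_dp(row: dict) -> dict:
    """Rename mapped columns; pass project-specific covariates through
    unchanged so they can still be joined onto the standardised table
    for occupancy modelling."""
    return {COLUMN_MAP.get(key, key): value for key, value in row.items()}

private_row = {
    "camera_id": "CAM-12",                         # invented example values
    "detection_time": "2025-09-01T03:14:00+10:00",
    "species": "Ornithorhynchus anatinus",
    "n_animals": 1,
    "distance_to_water_m": 15,  # project-specific covariate, kept as-is
}
print(to_camtrap_dp(private_row))
```

Because unmapped columns are retained, project-specific covariates stay alongside the standardised fields rather than being lost in the conversion.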