Editorial
1. Communicating uncertainty while preserving users’ trust
Recently, I had the opportunity to attend the 11th European Quality Conference (Q2024), co-organized by Statistics Portugal and Eurostat in Estoril from 4 to 7 June 2024. These conferences have always been a privileged moment for the European Statistical System to reflect on how institutional frameworks, new data sources, and innovative methods impact quality in official statistics.
The 2024 conference saw the participation of over 500 statisticians from 65 countries. Under the overarching theme “The role of official statistics as a pillar of democracy”, the programme of Q2024 comprised 40 regular sessions, 10 speed talks, 3 plenary sessions and 2 poster sessions. In total, 222 presentations were delivered.1
One of the key themes of the conference was “Communicating Quality”, with two sessions focusing on this topic, plus the Closing Plenary session dedicated to “Communicating official statistics in the present data ecosystem”. The speakers in the regular sessions and the five panellists in the closing session provided different perspectives on the topic.
On one side, the main problem highlighted was that statisticians are, in general, poor communicators and therefore fail to convey key messages in a way that everyone can understand. A representative of the media community, in particular, stated that the real issue is not ‘communicating the quality of statistics’, but rather ‘communicating statistics with quality’, which means providing relevant real-time data that are openly accessible and disseminated in a user-friendly way, while explaining their limitations. The media play a crucial role in the dissemination of official statistics today, being an essential platform for informed public discourse and, by extension, one of the pillars of a healthy democratic system. The current media landscape, however, is increasingly fragmented and fast-paced, shifting towards shorter formats and greater use of video content. It is therefore becoming ever more important that National Statistical Offices (NSOs) adapt their communication strategies to ensure their statistics are accessible and engaging for the general public. A significant challenge in this regard is translating raw statistical data into compelling stories that capture public interest, and using informal language that can be easily grasped by a broad audience, including younger generations. This task requires collaboration between statisticians and communication professionals, who together can craft narratives that are both accurate and engaging. In this context, simplifying and shortening content is crucial for media consumption, as journalists and the public often prefer quick, understandable information over complex technical details.
On the other side, one of the panellists argued that the biggest challenge for National Statistical Offices today is communicating uncertainty, particularly within a context of growing mistrust towards public institutions and deliberate efforts to undermine scientific credibility. Faced with an increasing societal demand for clear, definitive conclusions, most NSOs may be inclined to avoid communicating the inherent uncertainties in statistical data in order to maintain their credibility and public confidence in the data they disseminate.
The current growing mistrust towards public institutions is driven by the proliferation of misinformation and disinformation campaigns, particularly on social media, and by the increased polarization of media narratives. Often, disinformation campaigns target the credibility of public institutions and scientific research, promoting skepticism about the accuracy and objectivity of the data provided by NSOs and playing a considerable role in eroding public trust in official statistics. For example, during election periods or, more recently, in the context of the COVID-19 crisis, misinformation has spread doubts about the validity of the official figures published by NSOs.
Moreover, the media landscape has become increasingly polarized, with some outlets perpetuating narratives that question the reliability of official statistics. This media environment can lead to selective reporting or misinterpretation of data, further exacerbating public mistrust. For instance, partisan media might highlight discrepancies or revisions in official statistics as evidence of manipulation or incompetence, rather than as part of the normal process of data refinement.
In some cases, this mistrust can have a historical justification, as statistics have sometimes been used in certain countries to justify government policies and decisions. This historical misuse has left a legacy of skepticism among the public, who may view current data releases through a lens of suspicion. Even when statistics are presented with the best intentions and highest standards of accuracy, the shadow of past misuse can undermine public confidence in the long term.
This mistrust is compounded by a societal preference for certainty,2 as the public and policymakers often expect definitive conclusions and prefer clear, straightforward interpretations, even when the data inherently involve a degree of uncertainty that conflicts with such expectations. This demand for certainty is understandable, as clear data can simplify decision-making processes and avoid the perceived risks associated with ambiguity. However, it can lead to unrealistic expectations about the precision and reliability of statistical information and, in turn, to a reluctance by statistical agencies to fully disclose the extent of uncertainties in their data, fearing that doing so may undermine their credibility and authority.
There is a common misconception among the general public that numbers are inherently precise and that statistical data can provide absolute truths. Uncertainty is nonetheless an unavoidable aspect of statistical estimation and prediction. NSOs should actively work to manage public and policymaker expectations regarding the certainty and precision of statistical data. This might involve highlighting the probabilistic nature of many statistics and the reasons why exact figures can be misleading without a proper understanding of the context and the accompanying uncertainty. Ignoring or not communicating this uncertainty can lead to misinterpretations of data, resulting in misguided decisions by policymakers and the public.3 A more transparent approach to communicating uncertainty is not only ethical but also essential for maintaining the credibility of statistical data.4
The interplay between public mistrust and the demand for certainty creates a challenging environment for NSOs. There is a delicate balance to strike between being transparent about the limitations and uncertainties inherent in statistical data and presenting information in a way that is accessible and trustworthy to the public. In the realm of official statistics, where data is used to inform critical policy decisions, the accurate representation of uncertainty is particularly crucial. However, conveying uncertainty effectively is challenging, especially when statistical literacy among the general public is inadequate, and requires careful consideration of various strategies. Engaging in field research to test how different methods of communicating uncertainty affect public understanding and trust is particularly useful. Experiments with real audiences can reveal whether certain approaches, such as specifying numerical ranges or providing access to underlying data, are effective in conveying uncertainty without overwhelming the audience.5
One of the most straightforward methods to communicate uncertainty is through numerical expressions, such as confidence intervals or ranges. This approach allows recipients to grasp the possible variability around a point estimate. Research shows that numerical communication of uncertainty does not significantly diminish trust in the data or the source. Participants exposed to numerical ranges generally perceived the data as more honest without a corresponding drop in trust levels.6 Therefore, numerical methods are recommended for audiences comfortable with statistical concepts.
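To make the numerical approach concrete, here is a minimal sketch in Python; the unemployment figure and its standard error are invented for illustration, and the normal-approximation interval is just one common choice.

```python
# Illustrative (hypothetical) figures: an estimated unemployment rate of 6.2%
# with a standard error of 0.15 percentage points.
estimate = 6.2   # point estimate, in percent
std_err = 0.15   # standard error, in percentage points

# 95% confidence interval using the normal approximation (z = 1.96).
z = 1.96
lower, upper = estimate - z * std_err, estimate + z * std_err

# Reporting the range alongside the point estimate makes the
# possible variability visible to the reader.
print(f"Unemployment rate: {estimate:.1f}% (95% CI: {lower:.1f}%-{upper:.1f}%)")
```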
Verbal communication of uncertainty, using terms such as “about” or “approximately,” can be more accessible to non-expert audiences. This approach avoids the complexity of numerical data but introduces challenges such as ambiguity and lack of precision. Studies indicate mixed results for the effectiveness of verbal communication. While some verbal hints do not significantly impact trust, more explicit statements about uncertainty can reduce perceived credibility.7
The choice between numerical and verbal communication should consider the audience’s statistical literacy. For general public communication, combining both methods might enhance understanding without compromising trust.
Incorporating contextual information and visual aids can significantly enhance the communication of uncertainty. Contextual details help the audience understand the sources and implications of uncertainty. Visual aids, such as charts and infographics, can illustrate ranges and distributions effectively, making the data more accessible and easier to comprehend.8 For example, using a bar graph to show the unemployment rate with error bars representing its range can visually communicate uncertainty in a straightforward manner. Interactive tools that allow users to explore different scenarios and outcomes based on varying assumptions can also be powerful.
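As a minimal illustration of the error-bar chart mentioned above, the following sketch uses matplotlib with made-up quarterly figures; the values and styling are assumptions, not drawn from any actual NSO output.

```python
import matplotlib.pyplot as plt

# Hypothetical quarterly unemployment rates (%) with 95% CI half-widths.
quarters = ["Q1", "Q2", "Q3", "Q4"]
rates = [6.1, 6.4, 6.0, 5.8]
half_widths = [0.3, 0.3, 0.4, 0.3]

fig, ax = plt.subplots()
# Error bars make the range around each point estimate visible at a glance.
ax.bar(quarters, rates, yerr=half_widths, capsize=5, color="steelblue")
ax.set_ylabel("Unemployment rate (%)")
ax.set_title("Unemployment rate with 95% confidence intervals")
plt.show()
```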
Being transparent about the sources of data and their limitations is crucial for building and sustaining trust. This involves explaining the origins of the data, the methodology used, and any potential biases or errors. Transparency allows the audience to better assess the reliability of the information. Being upfront about uncertainties does not generally erode trust as long as the information is communicated clearly and honestly. It is also essential to avoid overloading the audience with technical jargon or complex expressions while ensuring that the uncertainty is neither understated nor overstated.9
Different audiences require tailored communication strategies. NSOs face the critical task of communicating complex statistical data to a diverse audience, which includes policymakers, academics, the general public, and the media. Each of these groups has varying levels of statistical literacy, different needs for data interpretation, and distinct ways of using statistical information. Therefore, a one-size-fits-all approach to communicating statistics, particularly regarding uncertainty, is inadequate. Tailored communication strategies are essential for effectively conveying the nuances of statistical data, ensuring that each audience receives information in a manner that is understandable and relevant to them.
Active engagement with various stakeholders, which can take the form of public forums, workshops, or online platforms where statisticians and data users can interact, discuss, and clarify uncertainties and interpretations of data, is also crucial. Such interactions, which can also help build statistical literacy, ensure that the data is used appropriately and that its limitations are understood.10
Communicating uncertainty in official statistics is a complex but essential task for NSOs. In an era marked by mistrust in public institutions and the widespread dissemination of misinformation, the stakes are high. NSOs must strive for greater transparency, standardization, clarity and accessibility in their reporting, while also engaging in public education and dialogue. The challenges are significant, but the potential benefits of effectively communicating uncertainty are substantial. Improved public trust, better-informed policy decisions, and a more statistically literate population are all attainable outcomes if NSOs commit to clear and transparent communication practices. Ultimately, the goal is not just to find a better way to disseminate data, but to foster an informed and engaged public that can critically evaluate statistical information. This requires a collaborative effort to build a culture of transparency, openness, and continuous learning. By adopting these strategies, statisticians and communicators can ensure that their audiences are well-informed and capable of making decisions based on a realistic understanding of the data and its inherent uncertainties.
2. The content of this issue
2.1 Interview with Paul Schreyer
This issue of the SJIAOS starts with an interview with Paul Schreyer, carried out by Jean-Pierre Cling, our new interview editor. Paul Schreyer, until very recently the Chief Statistician and Director of Statistics and Data at the Organisation for Economic Co-operation and Development (OECD), talks about the governance mechanisms that allow the OECD to identify current and future demands for statistics from policy makers and to remain at the forefront of the development of innovative statistical frameworks and standards. In this regard, the collaboration with other international organizations on the advancement of new statistical frameworks, such as the 2025 revision of the SNA, is crucial. He also describes the internal data governance arrangements that ensure strong coordination of the organization’s decentralized statistical system.
2.2 Special section: Understanding and assessing the value of official statistics
This section of the Journal, organized by Fiona Willis-Núñez and Angela Potter as Guest Editors, comprises five papers prompted partly by the outcomes of the Conference of European Statisticians task force on this topic, active from 2018 to 2022, and partly by voluntary contributions responding to the call for papers issued by the Journal in 2023. In the dedicated section of the Journal, Fiona and Angela provide a guest editorial and an introduction to the five papers. They also contribute the first, introductory paper, “Understanding the Value of Official Statistics”, in which they argue that before we can quantify ‘the value of official statistics’ we need to understand what this really means. This implies that national statistical offices should first define their core objectives as providers of a public good, and then develop strategies to achieve these objectives. According to them, future international efforts should concentrate on defining frameworks to better understand the connections between goals and value indicators, sharing experiences of initiatives aimed at proving and enhancing the value of official statistics, and developing a core set of measures using the methods detailed in the article. In “Diverse ideas about the value of Official Statistics systems”, Ken Roy offers an initial overview of different concepts of the value of Official Statistics systems, derived from a sample of formal corporate documents produced by National Statistical Offices, that could collectively inform a wider framework for communicating the potential impact of statistics systems on societal outcomes. In “Statistics for the public good: What it means and why it matters”, Sofi Nickson discusses emerging thoughts on what it may mean for statistics to serve the public good, and how this aligns with customer-centric views on value. In “Is there a quantitative relationship between Democracy and Official Statistics?”, Luca di Gennaro starts from the strong empirical correlation between indicators of statistical capacity and indicators of the proper functioning of a democratic system to argue that statistical information ‘trustable and available to all’ is one of the foundations on which modern democratic states are built. In “Rethinking official statistics: a sociological perspective”, Arman Bidarbakht-Nia provides a sociological framework to describe the interaction between statistical outputs and the process of constructing social realities, within a broader understanding of the social functions of official statistics in which people and non-state institutions play a key role in co-creation and cognitive input into data governance structures.
2.3 Innovative statistical methods
In the first article of this section, “Integrating Word Embedding and Topic Modeling for Sentiment Analysis: A Case Study on the Social Mood on Economy”, Elena Catanese, Mauro Bruno and Massimo de Cubellis (all from Istat, the Italian National Institute of Statistics) discuss the use of textual analysis and embedding spaces for sentiment analysis, focusing on “wordembox”, a tool developed by Istat. This tool enhances word embedding algorithms like Word2Vec with graph functionality to discover word clusters and enable implicit topic modelling. It was applied to analyse Social Mood on Economy (SME) tweets during the early 2022 Russia-Ukraine conflict. The study compared wordembox with traditional topic modelling methods (e.g., Bayesian Latent Dirichlet Allocation) and newer techniques (e.g., Top2Vec, BERTopic). Results were consistent across methods, showing that their combined use enhances interpretability and insights. The article highlights the need for national statistical offices to invest in validating these approaches, especially for analysing unstructured data like tweets, to improve analytical results. Istat is currently applying topic modelling techniques to study opinions on immigration and gender-based violence through Twitter.
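The wordembox tool itself is not reproduced here; purely to illustrate the underlying idea of discovering implicit topics through embedding neighbourhoods, a minimal sketch with gensim on a toy corpus might look as follows (the corpus and parameters are invented for illustration).

```python
from gensim.models import Word2Vec

# Toy corpus standing in for preprocessed tweets; each inner list is one
# tokenized document. This only sketches the idea, not Istat's pipeline.
sentences = [
    ["inflation", "prices", "energy", "rising"],
    ["jobs", "unemployment", "labour", "market"],
    ["inflation", "energy", "gas", "prices"],
    ["labour", "jobs", "wages", "market"],
]

# Train a small Word2Vec model on the corpus.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, seed=1)

# Nearest neighbours in the embedding space hint at implicit topics;
# a graph built over such similarities can then be clustered into word groups.
print(model.wv.most_similar("inflation", topn=3))
```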
The second article is “A Web-Intelligence information system to support the production of EuroGroups Register (EGR) statistics” by Alexandros Bitoulas (Sogeti), Antonio Laureti Palma (ISTAT), Alexandre Depire, Fernando Reis, Pau Gaya Riera and Ioannis Sopranidis (all from Eurostat). The study addresses the need to better understand the impact of globalisation on official statistics, focusing on the EuroGroups Register (EGR), a project carried out in collaboration by EU Member States, EFTA countries, and Eurostat to accurately represent multinational enterprise groups. The objective is to improve the EGR’s accuracy and completeness by extracting and verifying information from the web, including from Wikipedia; the article presents a methodology for assessing the quality of this information against official EGR data. Findings indicate that online public sources can supplement EGR data, particularly for attributes like the country of the global decision centre, turnover, and total assets. However, their contribution must be qualified and accurately validated. For example, employment data from public sources show moderate gains and require further analysis, especially for non-EU countries, to address coverage gaps.
The third article is “New skills in Symbolic Data Analysis for Official Statistics”, prepared by a pool of academics from different universities (Rosanna Verde, from Università degli Studi della Campania; Vladimir Batagelj, from University of Primorska; Paula Brito, from Universidade do Porto; Pedro Duarte Silva, from Universidade Católica Portuguesa; Simona Korenjak-Cerne, from University of Ljubljana; Jasminka Dobsa, from University of Zagreb; Edwin Diday, from University of Paris). The article draws attention to the use of Symbolic Data Analysis (SDA) in official statistics, showcasing three pilot techniques: a new aggregation method using unified summaries for creating symbolic objects, a model-based approach for interval data applied to the Portuguese Labour Force Survey, and similarity measures between classes based on category frequencies. The paper demonstrates the effectiveness of these SDA methods in handling the complex, aggregated data typical of official statistics. The findings also show SDA’s potential for extending statistical techniques to Big Data and enhancing data analysis and composite indicator construction. Collaboration with researchers from national statistical offices is recommended to leverage SDA’s explanatory power, especially compared with modern Machine Learning techniques.
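As a rough illustration of the aggregation step that turns micro-data into interval-valued symbolic objects, the following pandas sketch, with made-up wage data and arbitrary quantile bounds that are not the authors’ method, builds an interval description per group.

```python
import pandas as pd

# Toy micro-data: individual wages by region (hypothetical values).
micro = pd.DataFrame({
    "region": ["A", "A", "A", "B", "B", "B"],
    "wage": [1200, 1500, 1800, 1100, 1300, 2000],
})

# Aggregating units into interval-valued "symbolic objects": each region is
# described by an interval [10th, 90th percentile] instead of a single value.
symbolic = micro.groupby("region")["wage"].agg(
    lower=lambda s: s.quantile(0.1),
    upper=lambda s: s.quantile(0.9),
)
print(symbolic)
```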
In “SAE for binary variables with big data – A comparison of calibrated nearest neighbour and hierarchical Bayes methods of estimation” by Siu-Ming Tam (Tam Data Advisory Pty Ltd), a novel machine learning technique is introduced to obtain precise estimates in small geographical areas by leveraging big data sources. This method, calibrated K nearest neighbours (CKNN), combines hybrid estimation with imputed values and calibrates the collective sum of small area estimates to an independent national total. Evaluated using simulated data from the 2016 Australian population census, CKNN outperformed the Fay-Herriot method based on area-level covariates and also proved superior to the Battese-Harter-Fuller (BHF) method with unit-level covariates. CKNN’s advantage, however, diminishes when hybrid estimation is applied to the BHF method, showing a trade-off between precision and accuracy. These findings highlight the importance of hybrid estimation for small area estimation (SAE). Several assumptions are crucial for these methods: target variables must be observed without measurement errors in the big data set, there should be no over-coverage errors, the donor set must be sufficiently large, and covariates should be available for the entire population. For SAE, the article recommends the use of the CKNN method due to its efficient parameter determination and variable-agnostic nature. Unlike the BHF method, CKNN does not require constructing variable-specific linking models, making it easier for national statistical offices to generate SAEs across various target variables. Additionally, CKNN ensures internal consistency in imputed data across diverse variables, facilitating secondary analysis by researchers without further statistical processing.
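Tam’s actual CKNN implementation is not reproduced here; the sketch below merely illustrates, on simulated data and under heavy simplifications, the two ingredients the article describes: KNN-based imputation of a binary variable from a donor set, and calibration of the small-area totals to an independent national benchmark.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical donor set (big data with the binary target observed) and
# population covariates; none of this reproduces the paper's data.
X_donor = rng.normal(size=(1000, 3))
y_donor = (X_donor[:, 0] + rng.normal(size=1000) > 0).astype(int)
X_pop = rng.normal(size=(5000, 3))
area = rng.integers(0, 10, size=5000)    # small-area labels
national_total = 2400.0                  # independent benchmark total

# Step 1: impute the binary variable for each population unit via KNN.
knn = KNeighborsClassifier(n_neighbors=15).fit(X_donor, y_donor)
imputed = knn.predict_proba(X_pop)[:, 1]

# Step 2: calibrate so the small-area estimates sum to the national total.
raw_area_totals = np.bincount(area, weights=imputed, minlength=10)
calibrated = raw_area_totals * national_total / raw_area_totals.sum()
print(calibrated.round(1))
```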
In “Enhancing taxonomy-based extraction: leveraging information from online community platforms for digital skills demand identification in job ads”, Joanna Napierala (Cedefop) explores the impact of technological changes on job searching and skill requirements, highlighting online job advertisements (OJAs) as a crucial source for analysing labour market demand, especially for digital skills. The study introduces an experimental method for updating the digital skill classification, using natural language processing techniques to improve information extraction from OJAs. This method effectively identified programming terms but struggled with rapidly evolving AI terminology. Digital skills are increasingly vital in the workplace, making it essential for labour market analysts to understand the required professional competencies. Systems for extracting information from the unstructured text of OJAs based on a classification or taxonomy, such as the ESCO-driven approach shown in this article, tend to be more efficient in terms of processing time and cost, as they do not require costly post-validation of the obtained information by experts. Yet the main challenge of this approach is that the classifications or taxonomies, e.g., of digital skills, become obsolete rather rapidly. Exploring non-standard approaches that help keep the classifications up to date with emerging technologies requiring new skills is necessary for classification-based information extraction from OJAs to remain relevant.
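A minimal sketch of taxonomy-based extraction, with a hypothetical four-entry taxonomy standing in for ESCO, shows both the appeal of the approach and the obsolescence problem the article raises: terms absent from the taxonomy are simply missed.

```python
# Hypothetical skill taxonomy mapping terms to skill groups
# (a stand-in for an ESCO-style classification, not the real one).
taxonomy = {"python": "programming", "sql": "data management",
            "docker": "devops", "prompt engineering": "ai"}

ad = "We seek a data engineer with SQL, Python and Docker experience."
found = {term: label for term, label in taxonomy.items()
         if term in ad.lower()}
# Terms not yet in the taxonomy (e.g. a brand-new AI tool) go undetected,
# which is exactly why the classification must be kept up to date.
print(found)
```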
In “Indonesian GDP Movement Detection Using Online News Classification”, a team from Politeknik Statistika STIS, Jakarta (Lya Hulliyyatus Suadaa, Dinda Pusparahmi Sholawatunnisa, Setia Pramana and Usep Nugraha) explores the use of online economic news for real-time monitoring of Indonesian GDP movements and growth rates through advanced classification models. By employing web scraping techniques for data gathering, the research applies transfer learning with pre-trained language model transformers and compares their effectiveness to traditional machine learning algorithms. The results demonstrate that the pre-trained language models significantly outperform the machine learning models, achieving accuracies of up to 88.8%. The findings reveal that online news is a reliable alternative data source for the early detection of GDP changes. Additionally, the research confirms that transfer learning models, particularly IndoBERT-Large and IndoBERT-Base, deliver superior performance compared to machine learning models in terms of accuracy, precision, recall, and F1-score for detecting GDP movements and growth rates. The study intentionally retained the natural distribution of labels in the dataset to better represent real-world economic conditions, avoiding data manipulation techniques like augmentation or resampling. Future work will focus on refining GDP growth rate classification by integrating sentence selection methods based on key features to further enhance model performance.
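By way of illustration only, a fine-tuned classifier of this kind could be applied with the Hugging Face transformers pipeline as sketched below; the local model path is a placeholder, not the authors’ published checkpoint.

```python
from transformers import pipeline

# Placeholder path: assumes a classifier fine-tuned along the paper's lines
# (e.g. starting from an IndoBERT checkpoint) has been saved locally.
clf = pipeline("text-classification", model="./gdp-movement-classifier")

# An (invented) Indonesian economic news headline; the model maps it
# to a GDP-movement label such as "up" or "down".
headline = "Pertumbuhan ekonomi Indonesia melambat pada kuartal kedua"
print(clf(headline))
```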
2.4 Miscellaneous papers
In this section, the diverse nature of the studies reflects the richness and scope intrinsic to the methodological research conducted in statistical and academic institutions. The focus of the papers ranges from assessing the quality of household death reporting in censuses and surveys, to evaluating the quality of a post-enumeration survey in Ethiopia; from examining the opportunities offered by micro-data exchange for improving the accuracy of intra-EU import statistics, to understanding energy efficiency to craft sustainable energy policies within the EU; from reviewing the past and future history of GDP, to developing a taxonomy for Business-to-Government data sharing; from examining the role that private sector companies can play in filling data gaps in official statistics, to discussing the change management process needed to ensure an effective transition to Trusted Smart Statistics. These diverse papers, either proposing new tools and methods, or applying them to a variety of use-cases or domains, offer valuable insights that can significantly contribute to the advancement of official statistics worldwide.
The first paper in this section addresses the quality of household death reporting in censuses and surveys.
The second article is “Towards the 4th Population Census in Ethiopia: some insights into the feasibility of the Post-Enumeration Survey” by Giancarlo Carbonetti, Paolo Giacomi, Filomena Grassia and Alessandra Nuccitelli (all from ISTAT). This paper examines the feasibility of conducting a Post-Enumeration Survey (PES) in Ethiopia, where reliable population data is crucial but often lacking due to underdeveloped registration systems. In countries where population registers are not well-established, the population and housing census remains the primary source of detailed demographic data, living conditions, and other key socio-economic characteristics. The quality of the census findings is therefore crucial for several reasons, and conducting a PES appears to be the only feasible way to evaluate the census results. This study reports, in particular, the results of a series of pilot surveys conducted as part of a cooperation project aimed at supporting the Ethiopian Central Statistical Agency (CSA) in preparing for the 4th Ethiopian Population and Housing Census. The paper describes the key issues identified in these pilot surveys and provides insights into the practical challenges of implementing a PES in Ethiopia’s unique national context. Several critical issues related to survey design and execution were identified. With regard to sampling, a three-stage stratified scheme was recommended to address Ethiopia’s diverse geographic characteristics. As for data collection, Computer-Assisted Personal Interviewing was generally employed, while Paper-and-Pencil Interviewing was used only in areas with limited internet access. Although improvements such as the introduction of barcode stickers and enhanced data transfer procedures led to better data accuracy, challenges remained. The study also addresses the methodology for linking census and PES records, using both automated and manual processes. A prototype web application was developed to aid in record matching, although further refinements are required. Pilot survey results highlighted serious issues in record matching, due partly to structural problems such as high mobility among pastoralists and partly to inadequate information for accurate matching. These factors indicate that substantial improvements are needed across all phases of the PES cycle, from sample design to data collection to estimation. In conclusion, the study underscores the need for ongoing improvements to both survey practices and the broader data infrastructure to enhance the reliability of mortality and demographic statistics in Ethiopia. A well-functioning Civil Registration and Vital Statistics system is essential for unique individual identification and for producing accurate and regular population data.
In the third article of this section, “Selective Editing for Asymmetry Analysis in Intra-EU Trade Micro-Data-Exchange (MDE)”, a team from ISTAT (Francesco Ortame, Mauro Bruno, Maria Serena Causo, Giulio Massacci, Giuseppina Ruocco, and Simona Toti) examines the opportunities for improving the accuracy of intra-EU import statistics opened up by EU Regulation 2019/2152, which has mandated the sharing of microdata on intra-EU exports. The establishment of the Micro-Data Exchange (MDE) provides national statistical institutes with a new data source that allows them to reduce the response burden on data providers and enhance data quality. To address the challenge of ensuring consistency and comparability between MDE and national import data, Istat has developed a pioneering application built with the Shiny package in R. This tool facilitates exploratory data analysis, systematic error detection, and selective editing. By applying user-defined thresholds to relative contributions and asymmetry suspicion indices, the tool identifies key data discrepancies effectively. The integration of this open tool within the European Statistical System promotes greater interoperability, method harmonization, and adherence to official statistical standards. Key advantages of the tool include its environment agnosticism (allowing it to function across various platforms without requiring local installations), its seamless integration with existing R code, and its contribution to enhanced standardization in statistical practices. The tool’s end-to-end framework combines both back-end and front-end functionalities, offering a robust solution for detecting data asymmetries with minimal coding expertise. Its automated features for outlier detection and adherence to official statistical standards mark a significant improvement over traditional methods. Positive feedback from domain experts indicates its potential for effective integration into statistical production environments and its effectiveness in refining intra-EU trade statistics. The source code and sample data for this application are available on GitHub, highlighting the tool’s open-source nature and its adaptability for diverse statistical analysis needs.
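The Istat application is written in R with Shiny; purely to illustrate the threshold logic of selective editing on mirror flows, a simplified sketch (here in Python with pandas, using made-up figures and arbitrary thresholds, not the tool’s actual indices) might look as follows.

```python
import pandas as pd

# Toy mirror data: the partner-declared export value (from the MDE) versus
# the nationally recorded import value for the same flow. Figures are made up.
df = pd.DataFrame({
    "flow_id": [1, 2, 3, 4],
    "mde_export": [100.0, 250.0, 40.0, 900.0],
    "national_import": [102.0, 180.0, 41.0, 905.0],
})

# Relative contribution of each flow to total imports, and a simple
# asymmetry measure comparing the two declarations of the same flow.
df["contribution"] = df["national_import"] / df["national_import"].sum()
df["asymmetry"] = (df["national_import"] - df["mde_export"]).abs() / df["mde_export"]

# User-defined thresholds (illustrative values) select flows for manual review.
suspicious = df[(df["contribution"] > 0.05) & (df["asymmetry"] > 0.10)]
print(suspicious)
```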
In “Understanding energy efficiency is crucial for crafting effective and sustainable energy policies within the European Union (EU)”, Stavros Lazarou, Sandrine Herbeth, Loic Coent and Madeleine Mahovsky (all from Eurostat) present a detailed decomposition analysis of official EU energy statistics that aims to differentiate between genuine improvements in energy efficiency and external factors affecting energy consumption in various industries. The study utilizes an adapted Logarithmic Mean Divisia Index (LMDI) approach, which allows for a nuanced assessment of energy efficiency across various industries in the EU-27. The article begins by explaining the decomposition methodology, highlighting how the LMDI method was tailored to analyse EU energy data. The analysis covers several industries, including manufacturing, construction, residential, and transport. For each sector, the study investigates the factors influencing energy consumption and addresses the challenges encountered in data collection. By applying this method, the research provides valuable insights into how different industries contribute to overall energy consumption and efficiency within the EU. The research also explores energy intensity comparisons among EU-27 countries, providing a perspective that is independent of national economic sizes. This comparison aids in evaluating potential energy efficiency gains and highlights data quality issues. The study concludes that while decomposition analysis offers valuable insights, its utility is limited by data quality constraints. Future research could benefit from integrating energy consumption data with greenhouse gas emissions, using frameworks like the System of Environmental-Economic Accounting. Such integration could enhance understanding of the complex dynamics between economic activities and environmental impacts, leading to more informed and effective energy policies.
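For readers unfamiliar with the method, the standard additive two-factor LMDI-I decomposition, of which the article’s approach is an adaptation, splits the change in energy consumption between periods 0 and T into an activity effect and an intensity effect using logarithmic-mean weights (with sectoral activity \(A_i\) and intensity \(I_i = E_i / A_i\)):

```latex
\Delta E = E^{T} - E^{0}
  = \underbrace{\sum_i L\!\left(E_i^{T}, E_i^{0}\right)\ln\frac{A_i^{T}}{A_i^{0}}}_{\text{activity effect}}
  + \underbrace{\sum_i L\!\left(E_i^{T}, E_i^{0}\right)\ln\frac{I_i^{T}}{I_i^{0}}}_{\text{intensity effect}},
\qquad
L(a,b) = \frac{a-b}{\ln a - \ln b}.
```

Only the intensity effect reflects genuine efficiency change; the activity effect captures the external factors, such as output growth, that the article seeks to separate out.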
In “To GDP and Beyond: the past and future history of the world’s most powerful statistical indicator”, Stephen MacFeely (WHO), Peter van de Ven and Anu Peltola (UNCTAD) review the origins and evolution of the System of National Accounts (SNA) and the Gross Domestic Product (GDP), discussing their adaptation to changes in the economy and the key criticisms they have faced since their inception. In the current debate, GDP’s shortcomings fall into three main areas: a) measurement problems within the existing framework, arising from changes in the economy and society, most notably globalization and digitalization; b) limits of the SNA framework itself and of its ability to measure well-being and sustainability (the catch-all “Beyond GDP” debate); and c) the promotion of a ‘growth-at-all-costs’ ideology, which works against environmental and social reforms. The paper discusses whether it is possible to address at least some aspects of these issues within the SNA, either in the ‘core’ sequence of economic accounts or through a broadened set of accounts, or whether new approaches are necessary. It concludes with an overview of the 2025 SNA update and of new work beginning at the UN to encourage member states to move beyond GDP. Efforts to move beyond GDP involve rethinking progress to embrace sustainability, relational well-being and inclusivity. Ultimately, transitioning away from GDP is both a technical and political challenge, requiring global and national commitment to new measures of progress and development.
The sixth article of this section is “Towards a Taxonomy for Business-to-Government Data Sharing” by Serena Signorelli, Matteo Fontana, Michele Vespe, Lorenzo Gabrielli, and Eleonora Bertoni (all from the European Commission Joint Research Centre). Business-to-Government (B2G) data sharing has gained prominence in recent years, driven by the recognition of the significant contribution that privately-held data could provide in better understanding societal issues across various contexts. Each context demands different quality levels for the data, which highlights the need for a nuanced approach to data sharing. The objective of this work is to develop a comprehensive taxonomy for B2G data sharing initiatives. This involves categorizing the different scenarios where B2G data sharing occurs and identifying key attributes and quality principles relevant to each context. By creating this taxonomy, the authors aim to clarify the specificities and requirements of B2G data sharing, facilitating more effective and dynamic data flows. The paper introduces a preliminary proposal for a B2G data sharing taxonomy, which is expected to be refined and expanded with new insights. This will provide the basis for identifying commonalities in the quality principles required in different data sharing contexts. Moreover, the paper lays out the foundational information that should accompany any B2G data sharing initiative, setting the groundwork for future inventories of such initiatives. The goal is to highlight the diversity within B2G data sharing settings, which are often treated as a monolithic entity but actually encompass a wide range of objectives, contexts, and requirements. By offering insights into these differences, the work aims to improve the understanding and management of B2G data sharing.
The seventh article of this section is “Bridging the gap: Gallup’s role supporting the official statistics ecosystem” by Andrew Rzepa, Benedict Vigers, Kiki Papachristoforou (from Gallup, Inc.) with Stephen Crabtree (from Gartner, Inc.). In a rapidly evolving global data landscape that places new demands on official statistics, partnerships between NSOs, international organizations, and private sector companies can help address the need to collect frequent, high-quality data on a growing array of indicators. Using Gallup, a global research and analytics firm, as an example, this paper demonstrates the value that private sector organizations can bring to the realm of official statistics. By adhering to rigorous statistical standards and to principles of transparency and respondent confidentiality, private entities can fill critical data gaps and support accountability on global issues, such as SDG monitoring. Gallup’s ability to provide high-frequency data and adhere to ethical standards illustrates the role that the private sector can play in complementing official statistics efforts. This role includes developing survey instruments, validating measurement frameworks, and optimizing data collection methods, all while ensuring accuracy and reliability. Such collaboration strengthens the global data ecosystem and contributes to sustainable and effective statistics.
In “Organizational sustainability to support Trusted Smart Statistics: Istat’s experience”, Gerarda Grippo and Massimo De Cubellis (both from Istat) discuss the comprehensive change management process, spanning all organizational dimensions, undertaken at Istat to ensure an effective transition to Trusted Smart Statistics (TSS). This innovation requires integrating new processes and new data sources with traditional methods, while upholding standards of relevance, quality, and trust. The paper examines Istat’s approach to adapting to these changes, highlighting the organizational solutions implemented, the investments in research and innovation, and the associated benefits. Istat, a leader in big data experimentation in Europe, has developed a modernization program and established the TSS Centre to guide technological, methodological, legal, and human resources adaptations. The lessons learned from this process stress that digital transformation requires updating business models through interdisciplinary collaboration so as to maintain and enhance trust in official statistics. Istat’s strategic decisions, such as the centralization of methodological skills and the creation of research infrastructures, have enabled the TSS Centre to manage organizational complexity, improve efficiency, and drive innovation while balancing routine production needs with new initiatives. Istat’s efforts have consolidated experiments, improved statistical information, and fostered new collaborations, enhancing the organization’s position and sustaining public trust in official statistics.
3. The SJIAOS discussion platform put on hold
With the release of this issue of the Journal (September 2024, Vol. 40.3), the SJIAOS discussion platform published on the SJIAOS website (https://officialstatistics.com/discussion-platform) will be put on hold. Since September 2019 (Volume 35.3), a new discussion has been initiated every three months with the release of each Journal issue. The discussions were often launched in parallel with the publication of a specific manuscript (or set of manuscripts) on the same topic, or even a whole issue of the Journal; these manuscripts would provide the background for the discussion. Moreover, over this five-year period, three additional discussions were launched independently. Each discussion would run for a year and, in most cases, be closed with a concluding commentary by the author(s) of the topical article. Overall, the discussions constitute an extensive overview of important topics for official statistics: many of them are of significant general relevance, while some are more connected to topics that were prominent in the global statistics community at the moment of publication.
The idea behind the establishment of the discussion platform was to offer anyone working in or interested in official statistics an opportunity to contribute to topical discussions at their convenience. In particular, the discussion platform was meant to fulfil three objectives:
1) To attract special attention to relevant topics in official statistics. The publication of the discussions on the Journal’s website attracted extra attention to the topic covered in the specific background manuscripts. The manuscripts related to the discussions were always published as free access.
2) To trigger comments from the readers via posts in the blog as part of the discussion. This happened for about half of the discussions.
3) To channel website traffic to the Journal, the IAOS and its activities in general. Judging by the sharp increase from 2019 onwards in website visits, clicks on the journal, and article downloads, this initiative has worked very well.
After a few successful years, however, participation in the blog, measured by the number of written comments on the discussion statements, has declined substantially. Beyond that, most of the topics of general interest that can attract wider attention have by now been covered over these five years. Last but not least, the dedicated Journal website will be closed at the end of the year and replaced by a new webpage on the website of the new publisher, alongside the over 1,100 journals published by Sage. For these reasons, it has been decided to put the discussion platform on hold and reflect on possible alternatives for the future. Of course, the 20 discussions, their statements, and background information will still be available via the IAOS website (https://iaos-isi.org/statistical-journal/), but commenting will no longer be possible.
Pietro Gennari
Editor-in-Chief
August 2024
Statistical Journal of the IAOS
E-mail: [email protected]
Notes
1 The entire programme of Q2024, including the PowerPoint presentations and papers, is available at: https://www.q2024.pt/programme/sessions.
2 Manski, C. F. Communicating Uncertainty in Official Economic Statistics: An Appraisal Fifty Years after Morgenstern. Journal of Economic Literature, 2015, 53(3), 631–653. http://dx.doi.org/10.1257/jel.53.3.631.
3 Lynn, P. Editorial: Measuring and communicating survey quality. J. R. Statist. Soc. A (2004) 167, Part 4, pp. 575–578.
4 Manski, C. F., op. cit.
5 Kerr, J., van der Bles, A.-M., Dryhurst, S., Schneider, C. R., Chopurian, V., Freeman, A. J., van der Linden, S. The effects of communicating uncertainty around statistics, on public trust. R Soc Open Sci. 2023 Nov 22; 10(11): 230604. doi: 10.1098/rsos.230604.
6 Office for Statistics Regulation (internet). Approaches to presenting uncertainty in the statistical system. September 2022. Available at: https://osr.statisticsauthority.gov.uk/wp-content/uploads/2022/09/Approaches_to_presenting_uncertainty_in_the_statistical_system.pdf. ONS Economic Statistics Centre of Excellence (internet). Modelling and communicating data uncertainty. Available at: https://www.escoe.ac.uk/projects/modelling-and-communicating-data-uncertainty/.
7 Kerr, J. et al., op. cit.
8 Kerr, J. et al., op. cit.
9 Full Fact (internet). How to communicate uncertainty. Available at: https://fullfact.org/media/uploads/en-communicating-uncertainty.pdf.
10 Lachenbruch, P. A. Communicating Statistics and Developing Professionals: The 2008 ASA Presidential Address. Journal of the American Statistical Association, 2009, 104(485), 1–4. DOI: 10.1198/jasa.2009.0033.