TA-ANALYSIS#

Collected data from tests and monitoring of deployed software in eclipse-score/inc_nlohmann_json is analysed according to specified objectives.

Supported Requests:

Supporting Items:

References:

  • TSF/trustable/assertions/TA-ANALYSIS-CHECKLIST.md

Checklist for TA-ANALYSIS from Codethink#

  • What fraction of Expectations are covered by the test data?

    Answer: The two expectations are JLEX-01 and JLEX-02. Every statement supporting these expectations is ultimately supported by a test, except for WFJ-06. For WFJ-06 it is impossible to provide a direct test, since it is a statement about infinitely many cases. Indirect evidence is provided by the rejection of ill-formed JSON data; a minimal illustration follows this checklist.

  • What fraction of Misbehaviours are covered by the monitored indicator data?

    Answer: For the intended use-case, no misbehaviours have been identified. Furthermore, no indicator data are collected.

  • How confident are we that the indicator data are accurate and timely?

    Answer: No indicator data are collected.

  • How reliable is the monitoring process?

    Answer: Due to no indicator data being collected, there is no monitoring process.

  • How well does the production data correlate with our test data?

    Answer: Due to the general nature of the library, there are no production data.

  • Are we publishing our data analysis?

    Answer: Since no indicator data are collected and there are no production data to compare against our test data, no data analysis is performed, and consequently none is published.

  • Are we comparing and analysing production data vs test?

    Answer: There are no production data.

  • Are our results getting better, or worse?

    Answer: Neither.

  • Are we addressing spikes/regressions?

    Answer: There are no spikes in the non-existent indicator data. If a test ever fails, the failure is investigated. The results of fuzz testing are investigated in the original nlohmann/json.

  • Do we have sensible/appropriate target failure rates?

    Answer: For the unit and integration tests, zero. The target failure rate of fuzz testing is not under our control.

  • Do we need to check the targets?

    Answer: For the unit and integration tests, no. Since the fuzz testing runs and is investigated in the original nlohmann/json, there is no need to check the target.

  • Are we achieving the targets?

    Answer: For the unit and integration tests, yes. The achieving of the targets for the fuzz-testing is evaluated within the original nlohmann/json.

  • Are all underlying assumptions and target conditions for the analysis specified?

    Answer: Since none of the unit and integration tests are expected to fail, there is no further analysis of the results beyond verifying this expectation. If any test ever fails, the resulting failure of the CI pipeline prompts the maintainer to investigate.

  • Have the underlying assumptions been verified using known good data?

    Answer: The assumption that all unit and integration tests succeed under the expected conditions is demonstrated by the non-failure of the CI pipeline.

  • Has the Misbehaviour identification process been verified using known bad data?

    Answer: Misbehaviour reports on nlohmann/json usually include minimal working examples for reproducing the faulty behaviour, enabling anyone to verify the identified misbehaviours. There is, however, no automated process for the identification of misbehaviours.

  • Are results shown to be reproducible?

    Answer: It is expected that the tests can be reproduced on any sufficiently powerful modern machine.
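
To make the indirect evidence for WFJ-06 concrete, here is a minimal sketch, assuming the single header is available as nlohmann/json.hpp, showing well-formed input being accepted and ill-formed input being rejected via json::accept:

```cpp
// Sketch: indirect evidence for WFJ-06 via rejection of ill-formed input.
#include <cassert>
#include <nlohmann/json.hpp>

int main() {
    // Well-formed JSON in the sense of RFC 8259 is accepted...
    assert(nlohmann::json::accept(R"({"key": [1, 2, 3]})"));
    // ...while ill-formed inputs (trailing comma, unquoted key) are rejected.
    assert(!nlohmann::json::accept(R"({"key": [1, 2, 3],})"));
    assert(!nlohmann::json::accept(R"({key: 1})"));
}
```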

Fallacies:

None

Graph:

No Image

| date-time | TA-ANALYSIS |
| --- | --- |
| 2025-08-26 12:04:48 | 0.00 |
| 2025-08-26 15:38:44 | 0.00 |
| 2025-08-27 10:29:06 | 0.00 |
| 2025-08-27 13:23:16 | 0.00 |
| 2025-08-27 14:20:48 | 0.00 |
| 2025-08-27 16:24:53 | 0.00 |
| 2025-08-28 17:29:59 | 0.00 |
| 2025-08-29 15:47:02 | 0.00 |
| 2025-09-10 16:20:35 | 0.00 |
| 2025-09-11 09:23:26 | 0.00 |
| 2025-09-12 13:29:30 | 0.00 |
| 2025-09-15 11:48:05 | 0.00 |
| 2025-09-15 16:25:33 | 0.00 |
| 2025-09-16 11:32:53 | 0.00 |
| 2025-09-16 15:15:09 | 0.00 |
| 2025-09-17 09:31:47 | 0.00 |
| 2025-09-19 12:26:04 | 0.00 |
| 2025-09-26 09:39:39 | 0.00 |
| 2025-09-29 10:05:38 | 0.00 |
| 2025-10-06 15:13:57 | 0.00 |
| 2025-10-07 17:09:21 | 0.00 |
| 2025-10-14 18:05:11 | 0.00 |
| 2025-10-23 16:32:31 | 0.00 |
| 2025-10-24 08:52:52 | 0.00 |
| 2025-10-24 10:13:58 | 0.00 |
| 2025-10-27 12:37:56 | 0.00 |
| 2025-11-03 13:09:23 | 0.00 |
| 2025-11-06 18:37:31 | 0.00 |
| 2025-11-14 15:50:06.044992 | 0.00 |


TA-BEHAVIOURS#

Expected or required behaviours for the nlohmann/json library are identified, specified, verified and validated based on analysis.

Supported Requests:

Supporting Items:

References:

  • TSF/trustable/assertions/TA-BEHAVIOURS-CHECKLIST.md

Checklist for TA-BEHAVIOURS from Codethink#

  • How has the list of Expectations varied over time?

    Answer: The list of expectations is taken from here; its development can be retraced using git.

  • How confident can we be that this list is comprehensive?

    Answer: The list of expectations was collected from the stakeholders in S-CORE, so we are very confident that it is comprehensive. The expectation to serialize user data into JSON format

  • Could some participants have incentives to manipulate information?

    Answer: We cannot imagine any such incentive.

  • Could there be whole categories of Expectations still undiscovered?

    Answer: It is unlikely, but the parsing of CBOR could become relevant at some time.

  • Can we identify Expectations that have been understood but not specified?

    Answer: No.

  • Can we identify some new Expectations, right now?

    Answer: No.

  • How confident can we be that this list covers all critical requirements?

    Answer: We cannot think of any requirement of a JSON parser more critical than parsing JSON data in the sense of RFC 8259; a round-trip sketch follows this checklist.

  • How comprehensive is the list of tests?

    Answer: Currently, the branch coverage is 93.865% and the line coverage is 99.186%, cf. JLS-27.

  • Is every Expectation covered by at least one implemented test?

    Answer: Yes, both of the expectations are covered by at least one implemented test. Moreover, each statement supporting the expectations is covered by a test with the exception of WFJ-06.

  • Are there any Expectations where we believe more coverage would help?

    Answer: No.

  • How do dependencies affect Expectations, and are their properties verifiable?

    Answer: The library nlohmann/json has no external dependencies, so in particular there are none that affect Expectations.

  • Are input analysis findings from components, tools, and data considered in relation to Expectations?

    Answer: No findings have been identified.
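
As a concrete illustration of the two expectations, the following sketch serialises user data to JSON text and parses it back; the exact wording of JLEX-01 and JLEX-02 is assumed here, not quoted:

```cpp
// Sketch of the serialise/parse pair the expectations are assumed to cover.
#include <cassert>
#include <string>
#include <nlohmann/json.hpp>

using nlohmann::json;

int main() {
    json user = {{"name", "Ada"}, {"id", 42}, {"tags", {"a", "b"}}};
    std::string text = user.dump();   // serialise to an RFC 8259 string
    json parsed = json::parse(text);  // parse it back
    assert(parsed == user);           // the round trip is lossless here
}
```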

Fallacies:

None

Graph:

No Image

| date-time | TA-BEHAVIOURS |
| --- | --- |
| 2025-08-26 12:04:48 | 0.00 |
| 2025-08-26 15:38:44 | 0.00 |
| 2025-08-27 10:29:06 | 0.00 |
| 2025-08-27 13:23:16 | 0.00 |
| 2025-08-27 14:20:48 | 0.00 |
| 2025-08-27 16:24:53 | 0.00 |
| 2025-08-28 17:29:59 | 0.00 |
| 2025-08-29 15:47:02 | 0.00 |
| 2025-11-14 15:50:06.044992 | 0.00 |


TA-CONFIDENCE#

Confidence in the nlohmann/json library is measured based on results of analysis.

Supported Requests:

Supporting Items:

References:

  • TSF/trustable/assertions/TA-CONFIDENCE-CHECKLIST.md

Checklist for TA-CONFIDENCE from Codethink#

  • What is the algorithm for combining/comparing the scores?

    Answer: It is the standard algorithm of trudag.

  • How confident are we that this algorithm is fit for purpose?

    Answer: We have no reason to assume that the standard algorithm is not fit for our purpose.

  • What are the trends for each score?

    Answer: This cannot be answered yet.

  • How well do our scores correlate with external feedback signals?

    Answer: This cannot be answered yet.

Fallacies:

None

Graph:

No Image

| date-time | TA-CONFIDENCE |
| --- | --- |
| 2025-08-26 12:04:48 | 0.00 |
| 2025-08-26 15:38:44 | 0.00 |
| 2025-08-27 10:29:06 | 0.00 |
| 2025-08-27 13:23:16 | 0.00 |
| 2025-08-27 14:20:48 | 0.00 |
| 2025-08-27 16:24:53 | 0.00 |
| 2025-08-28 17:29:59 | 0.00 |
| 2025-08-29 15:47:02 | 0.00 |
| 2025-11-14 15:50:06.044992 | 0.00 |


TA-CONSTRAINTS#

Constraints on adaptation and deployment of eclipse-score/inc_nlohmann_json are specified.

Supported Requests:

Supporting Items:

References:

  • TSF/trustable/assertions/TA-CONSTRAINTS-CHECKLIST.md

Checklist for TA-CONSTRAINTS from Codethink#

  • Are the constraints grounded in realistic expectations, backed by real-world examples?

    Answer: The constraints originate from S-CORE (e.g. AOU-04, AOU-05, AOU-07, AOU-21), the standard RFC 8259 (e.g. AOU-05, AOU-20, AOU-21) and the library nlohmann/json itself (AOU-06, AOU-20), in order to ensure that the expectations are met.

  • Do they effectively guide downstream consumers in expanding upon existing Statements?

    Answer: Not yet answered.

  • Do they provide clear guidance for upstreams on reusing components with well-defined claims?

    Answer: Not yet answered.

  • Are any Statements explicitly designated as not reusable or adaptable?

    Answer: No statement has been intentionally designated as not reusable or adaptable.

  • Are there worked examples from downstream or upstream users demonstrating these constraints in practice?

    Answer: Not yet answered.

  • Have there been any documented misunderstandings from users, and are these visibly resolved?

    Answer: Yes; it is documented that brace initialisation (cf. AOU-06) regularly leads to confusion, cf. here. A short illustration follows this checklist.

  • Do external users actively keep up with updates, and are they properly notified of any changes?

    Answer: External users of the library are not automatically notified of updates, and are neither assumed nor required to keep up to date. If an external user forks the GitHub repository, however, GitHub automatically indicates whenever the upstream changes.
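
For reference, the brace-initialisation pitfall mentioned above (cf. AOU-06) can be illustrated as follows; the surprise is that structurally similar initializer lists yield different JSON types:

```cpp
// Sketch of the brace-initialisation pitfall: initializer lists are
// interpreted structurally, which regularly surprises users.
#include <cassert>
#include <nlohmann/json.hpp>

using nlohmann::json;

int main() {
    // A list of two strings is an array...
    json a = {"key", "value"};
    assert(a.is_array());
    // ...while a nested list of string/value pairs becomes an object.
    json o = {{"key", "value"}};
    assert(o.is_object());
}
```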

Fallacies:

None

Graph:

No Image

| date-time | TA-CONSTRAINTS |
| --- | --- |
| 2025-08-26 12:04:48 | 0.00 |
| 2025-08-26 15:38:44 | 0.00 |
| 2025-08-27 10:29:06 | 0.00 |
| 2025-08-27 13:23:16 | 0.00 |
| 2025-08-27 14:20:48 | 0.00 |
| 2025-08-27 16:24:53 | 0.00 |
| 2025-08-28 17:29:59 | 0.00 |
| 2025-08-29 15:47:02 | 0.00 |
| 2025-11-14 15:50:06.044992 | 0.00 |


TA-DATA#

Data in eclipse-score/inc_nlohmann_json is collected from tests, and from monitoring of deployed software, according to specified objectives.

Supported Requests:

Supporting Items:

References:

  • TSF/trustable/assertions/TA-DATA-CHECKLIST.md

Checklist for TA-DATA from Codethink#

  • Is all test data stored with long-term accessibility?

    Answer: If we assume that GitHub is long-term accessible, then yes.

  • Is all monitoring data stored with long-term accessibility?

    Answer: There are no monitoring data.

  • Are extensible data models implemented?

    Answer: The data are stored in an SQLite database; a minimal sketch follows this checklist.

  • Is sensitive data handled correctly (broadcasted, stored, discarded, or anonymised) with appropriate encryption and redundancy?

    Answer: There are no sensitive data produced, collected or stored.

  • Are proper backup mechanisms in place?

    Answer: No more than the default mechanisms of GitHub.

  • Are storage and backup limits tested?

    Answer: No.

  • Are all data changes traceable?

    Answer: Yes, due to the use of GitHub.

  • Are concurrent changes correctly managed and resolved?

    Answer: Yes, due to the use of GitHub.

  • Is data accessible only to intended parties?

    Answer: Since the library is open source, there are no unintended parties.

  • Are any subsets of our data being published?

    Answer: Yes, the collected data are publicly available.
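
Since the answers above mention an SQLite store, here is a minimal sketch of such a store using the sqlite3 C API; the table name and schema are illustrative assumptions, chosen to match the date-time/score series reported in this document:

```cpp
// Sketch: persisting assertion scores in SQLite (sqlite3 C API).
// Table name and schema are illustrative assumptions.
#include <cstdio>
#include <sqlite3.h>

int main() {
    sqlite3* db = nullptr;
    if (sqlite3_open("scores.db", &db) != SQLITE_OK) return 1;

    const char* ddl =
        "CREATE TABLE IF NOT EXISTS scores ("
        "  assertion TEXT, recorded_at TEXT, score REAL);";
    const char* ins =
        "INSERT INTO scores VALUES ('TA-DATA', '2025-11-14 15:50:06', 0.0);";

    char* err = nullptr;
    if (sqlite3_exec(db, ddl, nullptr, nullptr, &err) != SQLITE_OK ||
        sqlite3_exec(db, ins, nullptr, nullptr, &err) != SQLITE_OK) {
        std::fprintf(stderr, "sqlite error: %s\n", err ? err : "unknown");
        sqlite3_free(err);
    }
    sqlite3_close(db);
}
```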

Fallacies:

None

Graph:

No Image

| date-time | TA-DATA |
| --- | --- |
| 2025-08-26 12:04:48 | 0.00 |
| 2025-08-26 15:38:44 | 0.00 |
| 2025-08-27 10:29:06 | 0.00 |
| 2025-08-27 13:23:16 | 0.00 |
| 2025-08-27 14:20:48 | 0.00 |
| 2025-08-27 16:24:53 | 0.00 |
| 2025-08-28 17:29:59 | 0.00 |
| 2025-08-29 15:47:02 | 0.00 |
| 2025-09-10 16:20:35 | 0.00 |
| 2025-09-11 09:23:26 | 0.00 |
| 2025-09-12 13:29:30 | 0.00 |
| 2025-09-15 11:48:05 | 0.00 |
| 2025-09-15 16:25:33 | 0.00 |
| 2025-09-16 11:32:53 | 0.00 |
| 2025-09-16 15:15:09 | 0.00 |
| 2025-09-17 09:31:47 | 0.00 |
| 2025-09-19 12:26:04 | 0.00 |
| 2025-09-26 09:39:39 | 0.00 |
| 2025-09-29 10:05:38 | 0.00 |
| 2025-10-06 15:13:57 | 0.00 |
| 2025-10-07 17:09:21 | 0.00 |
| 2025-10-14 18:05:11 | 0.00 |
| 2025-10-23 16:32:31 | 0.00 |
| 2025-10-24 08:52:52 | 0.00 |
| 2025-10-24 10:13:58 | 0.00 |
| 2025-10-27 12:37:56 | 0.00 |
| 2025-11-03 13:09:23 | 0.00 |
| 2025-11-06 18:37:31 | 0.00 |
| 2025-11-14 15:50:06.044992 | 0.00 |


TA-FIXES#

In the nlohmann/json repository, known bugs or misbehaviours are analysed and triaged, and critical fixes or mitigations are implemented or applied.

Supported Requests:

Supporting Items:

References:

  • TSF/trustable/assertions/TA-FIXES-CHECKLIST.md

Checklist for TA-FIXES from Codethink#

  • How many faults have we identified in nlohmann/json?

    Answer: There are no identifiable faults concerning the expectations.

  • How many unknown faults remain to be found, based on the number that have been processed so far?

    Answer: It is unlikely that there are unknown faults concerning the expectations.

  • Is there any possibility that people could be motivated to manipulate the lists (e.g. bug bonus or pressure to close).

    Answer: Since the project is entirely open source, it is quite unlikely.

  • How many faults may be unrecorded (or incorrectly closed, or downplayed)?

    Answer: There may be none, at least when it concerns the expectations.

  • How do we collect lists of bugs and known vulnerabilities from components?

    Answer: We pull the list of issues reported against nlohmann/json that are labelled as bugs and are either open or were opened since the last release; a sketch of such a query follows this checklist. The list is then stored using GitHub, making its history traceable.

  • How (and how often) do we check these lists for relevant bugs and known vulnerabilities?

    Answer: The list is pulled whenever we generate the documentation. If an issue was previously unrecorded, the resulting change of the trustable score encourages the maintainer to check the issue for applicability.

  • How confident can we be that the lists are honestly maintained?

    Answer: We cannot imagine a reason why the list would be dishonestly maintained.

  • Could some participants have incentives to manipulate information?

    Answer: We cannot think of a reason why.

  • How confident are we that the lists are comprehensive?

    Answer: We have no reason to assume that discovered bugs are not reported to nlohmann/json.

  • Could there be whole categories of bugs/vulnerabilities still undiscovered?

    Answer: Issues could be mislabelled, but it is unlikely that genuine bugs or vulnerabilities are not labelled as bugs; it is more likely that perceived issues arising from a misunderstanding of how the library works are labelled as bugs.

  • How effective is our triage/prioritisation?

    Answer: Since the library is not intended to be fixed within S-CORE, development being left to the original nlohmann/json, no triage or prioritisation is needed.

  • How many components have never been updated?

    Answer: None, the single component is up to date.

  • How confident are we that we could update them?

    Answer: If nlohmann/json releases a new version, we are very confident that we can update to it.

  • How confident are we that outstanding fixes do not impact our Expectations?

    Answer: We have not found any outstanding fixes impacting our expectations.

  • How confident are we that outstanding fixes do not address Misbehaviours?

    Answer: Since no misbehaviours have been identified, we are confident that no outstanding fixes address misbehaviours.
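
The pull described above could look roughly like the following sketch, which fetches open bug-labelled issues from the public GitHub REST API and parses the response with the library itself; the use of libcurl is an assumption, and the actual tooling may differ:

```cpp
// Sketch: fetching open issues labelled "bug" from nlohmann/json via the
// public GitHub REST API, parsed with the library itself.
#include <iostream>
#include <string>
#include <curl/curl.h>
#include <nlohmann/json.hpp>

static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    std::string body;
    CURL* curl = curl_easy_init();
    if (!curl) return 1;
    curl_easy_setopt(curl, CURLOPT_URL,
        "https://api.github.com/repos/nlohmann/json/issues?labels=bug&state=open");
    curl_easy_setopt(curl, CURLOPT_USERAGENT, "ta-fixes-sketch");  // required by GitHub
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    if (rc != CURLE_OK) return 1;

    // Print issue number and title for each reported bug.
    for (const auto& issue : nlohmann::json::parse(body))
        std::cout << issue["number"] << ": " << issue["title"] << '\n';
}
```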

Fallacies:

None

Graph:

No Image

| date-time | TA-FIXES |
| --- | --- |
| 2025-08-26 12:04:48 | 0.00 |
| 2025-08-26 15:38:44 | 0.00 |
| 2025-08-27 10:29:06 | 0.00 |
| 2025-08-27 13:23:16 | 0.00 |
| 2025-08-27 14:20:48 | 0.00 |
| 2025-08-27 16:24:53 | 0.00 |
| 2025-08-28 17:29:59 | 0.00 |
| 2025-08-29 15:47:02 | 0.00 |
| 2025-09-10 16:20:35 | 0.00 |
| 2025-09-11 09:23:26 | 0.00 |
| 2025-09-12 13:29:30 | 0.00 |
| 2025-09-15 11:48:05 | 0.00 |
| 2025-09-15 16:25:33 | 0.00 |
| 2025-09-16 11:32:53 | 0.00 |
| 2025-09-16 15:15:09 | 0.00 |
| 2025-09-17 09:31:47 | 0.00 |
| 2025-09-19 12:26:04 | 0.00 |
| 2025-09-26 09:39:39 | 0.00 |
| 2025-09-29 10:05:38 | 0.00 |
| 2025-10-06 15:13:57 | 0.00 |
| 2025-10-07 17:09:21 | 0.00 |
| 2025-10-14 18:05:11 | 0.00 |
| 2025-10-23 16:32:31 | 0.00 |
| 2025-10-24 08:52:52 | 0.00 |
| 2025-10-24 10:13:58 | 0.00 |
| 2025-10-27 12:37:56 | 0.00 |
| 2025-11-03 13:09:23 | 0.00 |
| 2025-11-06 18:37:31 | 0.00 |
| 2025-11-14 15:50:06.044992 | 0.00 |


TA-INDICATORS#

In eclipse-score/inc_nlohmann_json, advanced warning indicators for misbehaviours are identified, and monitoring mechanisms are specified, verified and validated based on analysis.

Supported Requests:

Supporting Items:

None

References:

  • TSF/trustable/assertions/TA-INDICATORS-CHECKLIST.md

Checklist for TA-INDICATORS from Codethink#

  • How appropriate/thorough are the analyses that led to the indicators?

    Answer: Since no misbehaviours have been identified for the use of the library for parsing and verification of JSON data according to RFC 8259, no warning indicators are implemented.

  • How confident can we be that the list of indicators is comprehensive?

    Answer: No warning indicators are implemented; of this we are very confident.

  • Could there be whole categories of warning indicators still missing?

    Answer: Yes, there could. Within S-CORE, however, any warning indicator that is not natively implemented in the original nlohmann/json should be implemented in the wrapper defining the interface between the library and the project using it; a sketch follows this checklist.

  • How has the list of advance warning indicators varied over time?

    Answer: It has stayed constant.

  • How confident are we that the indicators are leading/predictive?

    Answer: There are none.

  • Are there misbehaviours that have no advance warning indicators?

    Answer: There are no misbehaviours identified.

  • Can we collect data for all indicators?

    Answer: There are currently no implemented indicators, so no data are collected.

  • Are the monitoring mechanisms used included in our Trustable scope?

    Answer: No, but there are none to include.

  • Are there gaps or trends in the data?

    Answer: There are no data where gaps or trends could be identified.

  • If there are gaps or trends, are they analysed and addressed?

    Answer: There are no data.

  • Is the data actually predictive/useful?

    Answer: There are no data.

  • Are indicators from code, component, tool, or data inspections taken into consideration?

    Answer: There are no indicators.
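
As a sketch of the wrapper approach suggested above, the following hypothetical wrapper counts rejected parses, giving a monitoring layer one possible warning indicator to observe; the namespace and function names are assumptions:

```cpp
// Sketch of a warning indicator implemented in a (hypothetical) wrapper
// around the library: parse failures are counted so a monitoring layer
// could observe a rising failure rate.
#include <atomic>
#include <cstdint>
#include <optional>
#include <string>
#include <nlohmann/json.hpp>

namespace wrapper {

inline std::atomic<std::uint64_t> parse_failures{0};  // the indicator

inline std::optional<nlohmann::json> parse(const std::string& text) {
    // Exceptions disabled: errors yield a "discarded" value instead.
    nlohmann::json j =
        nlohmann::json::parse(text, nullptr, /*allow_exceptions=*/false);
    if (j.is_discarded()) {
        ++parse_failures;
        return std::nullopt;
    }
    return j;
}

}  // namespace wrapper
```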

Fallacies:

None

Graph:

No Historic Data Found


TA-INPUTS#

All inputs to the nlohmann/json library are assessed, to identify potential risks and issues.

Supported Requests:

Supporting Items:

References:

  • TSF/trustable/assertions/TA-INPUTS-CHECKLIST.md

Checklist for TA-INPUTS from Codethink#

The file single_include/nlohmann/json.hpp is the one and only component of the library.

  • Are there components that are not on the list?

    Answer: No.

  • Are there assessments for all components?

    Answer: Not yet answered.

  • Has an assessment been done for the current version of the component?

    Answer: Not yet answered.

  • Have sources of bug and/or vulnerability data been identified?

    Answer: There are no bug and/or vulnerability data.

  • Have additional tests and/or Expectations been documented and linked to component assessment?

    Answer: Not yet answered.

  • Are component tests run when integrating new versions of components?

    Answer: There are no further components.

  • Are there tools that are not on the list?

    Answer: The library does not use external tools, apart from the facilities of the C++ standard library.

  • Are there impact assessments for all tools?

    Answer: The library does not use external tools for which an impact assessment would have to be done.

  • Have tools with high impact been qualified?

    Answer: There are no tools with high impact.

  • Were assessments or reviews done for the current tool versions?

    Answer: The library does not use external tools for which an assessment would have to be done.

  • Have additional tests and/or Expectations been documented and linked to tool assessments?

    Answer: No.

  • Are tool tests run when integrating new versions of tools?

    Answer: The library does not use external tools for which a new version needs to be integrated.

  • Are tool and component tests included in release preparation?

    Answer: Yes, the tests of the library are included in the release.

  • Can patches be applied, and then upstreamed for long-term maintenance?

    Answer: Yes, if ever a misbehaviour is found and patched, then a pull-request to the original nlohmann/json repository can be opened to upstream the changes.

  • Do all dependencies comply with acceptable licensing terms?

    Answer: Yes, the library is licensed under the MIT License.

Fallacies:

None

Graph:

No Image

| date-time | TA-INPUTS |
| --- | --- |
| 2025-08-26 12:04:48 | 0.00 |
| 2025-08-26 15:38:44 | 0.00 |
| 2025-08-27 10:29:06 | 0.00 |
| 2025-08-27 13:23:16 | 0.00 |
| 2025-08-27 14:20:48 | 0.00 |
| 2025-08-27 16:24:53 | 0.00 |
| 2025-08-28 17:29:59 | 0.00 |
| 2025-08-29 15:47:02 | 0.00 |
| 2025-11-14 15:50:06.044992 | 0.00 |


TA-ITERATIONS#

All constructed iterations of the nlohmann/json library include source code, build instructions, tests, results and attestations.

Supported Requests:

Supporting Items:

References:

  • TSF/trustable/assertions/TA-ITERATIONS-CHECKLIST.md

Checklist for TA-ITERATIONS from Codethink#

  • How much of the software is provided as binary only, expressed as a fraction of the BoM list?

    Answer: None.

  • How much is binary, expressed as a fraction of the total storage footprint?

    Answer: None.

  • For binaries, what claims are being made and how confident are we in the people/organisations making the claims?

    Answer: There are no binaries.

  • For third-party source code, what claims are we making, and how confident are we about these claims?

    Answer: There is no third-party source code in the library.

  • For software developed by us, what claims are we making, and how confident are we about these claims?

    Answer: These claims, and our confidence in them, are the subject of the remainder of this documentation.

Fallacies:

None

Graph:

No Image

| date-time | TA-ITERATIONS |
| --- | --- |
| 2025-08-26 12:04:48 | 0.00 |
| 2025-08-26 15:38:44 | 0.00 |
| 2025-08-27 10:29:06 | 0.00 |
| 2025-08-27 13:23:16 | 0.00 |
| 2025-08-27 14:20:48 | 0.00 |
| 2025-08-27 16:24:53 | 0.00 |
| 2025-08-28 17:29:59 | 0.00 |
| 2025-08-29 15:47:02 | 0.00 |
| 2025-11-14 15:50:06.044992 | 0.00 |


TA-METHODOLOGIES#

Manual methodologies applied for the nlohmann/json library by contributors, and their results, are managed according to specified objectives.

Supported Requests:

Supporting Items:

References:

  • TSF/trustable/assertions/TA-METHODOLOGIES-CHECKLIST.md

Checklist for TA-METHODOLOGIES from Codethink#

This project purely follows the methodologies of Eclipse S-CORE.

  • Are the identified gaps documented clearly to justify using a manual process?

    Answer:

  • Are the goals for each process clearly defined?

    Answer:

  • Is the sequence of procedures documented in an unambiguous manner?

    Answer:

  • Can improvements to the processes be suggested and implemented?

    Answer:

  • How frequently are processes changed?

    Answer:

  • How are changes to manual processes communicated?

    Answer:

  • Are there any exceptions to the processes?

    Answer:

  • How is evidence of process adherence recorded?

    Answer:

  • How is the effectiveness of the process evaluated?

    Answer:

  • Is ongoing training required to follow these processes?

    Answer:

Fallacies:

None

Graph:

No Image

| date-time | TA-METHODOLOGIES |
| --- | --- |
| 2025-08-26 12:04:48 | 0.00 |
| 2025-08-26 15:38:44 | 0.00 |
| 2025-08-27 10:29:06 | 0.00 |
| 2025-08-27 13:23:16 | 0.00 |
| 2025-08-27 14:20:48 | 0.00 |
| 2025-08-27 16:24:53 | 0.00 |
| 2025-08-28 17:29:59 | 0.00 |
| 2025-08-29 15:47:02 | 0.00 |
| 2025-11-14 15:50:06.044992 | 0.00 |


TA-MISBEHAVIOURS#

Prohibited misbehaviours for the nlohmann/json library are identified, and mitigations are specified, verified and validated based on analysis.

Supported Requests:

Supporting Items:

References:

  • TSF/trustable/assertions/TA-MISBEHAVIOURS-CHECKLIST.md

Checklist for TA-MISBEHAVIOURS from Codethink#

  • How has the list of misbehaviours varied over time?

    Answer: The list of misbehaviours is collected using GitHub, so its development is traceable.

  • How confident can we be that this list is comprehensive?

    Answer: Due to the collaborative nature of the open source community, we deem it quite unlikely that there are any known misbehaviours which are not reported to the repository nlohmann/json.

  • How well do the misbehaviours map to the expectations?

    Answer: No identified misbehaviours touch the expectations.

  • Could some participants have incentives to manipulate information?

    Answer: We cannot think of an incentive that any contributor could have to manipulate the information.

  • Could there be whole categories of misbehaviours still undiscovered?

    Answer: Given the wide use and long-standing development of the library, it is quite unlikely that any major misbehaviours, in particular regarding the parsing and validation of JSON data in the sense of RFC 8259, remain undiscovered.

  • Can we identify misbehaviours that have been understood but not specified?

    Answer: No.

  • Can we identify some new misbehaviours, right now?

    Answer: No.

  • Is every misbehaviour represented by at least one fault induction test?

    Answer: Since there are no misbehaviours that concern the use within S-CORE, no.

  • Are fault inductions used to demonstrate that tests which usually pass can and do fail appropriately?

    Answer: No. (A sketch of what such a fault induction could look like follows this checklist.)

  • Are all the fault induction results actually collected?

    Answer: No.

  • Are the results evaluated?

    Answer: No.

  • Do input analysis findings on verifiable tool or component claims and features identify additional misbehaviours or support existing mitigations?

    Answer: Currently, no analysis identifies additional misbehaviours. The only such analysis is indirect, via the fuzz testing, which currently does not identify additional misbehaviours.
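
For illustration, a fault-induction test, should one ever be added, could look like the following sketch: a known-good document is deliberately corrupted and the usually-passing acceptance check is expected to fail. The corruption scheme is illustrative only:

```cpp
// Sketch of a fault-induction test: corrupt a known-good document and
// check that the usually-passing acceptance test fails as it should.
#include <cassert>
#include <string>
#include <nlohmann/json.hpp>

int main() {
    std::string good = R"({"key": [1, 2, 3]})";
    assert(nlohmann::json::accept(good));  // the test that usually passes

    std::string bad = good;
    bad.erase(bad.find('['), 1);           // induce a fault: drop a bracket
    assert(!nlohmann::json::accept(bad));  // and observe the failure
}
```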

Fallacies:

None

Graph:

No Image

| date-time | TA-MISBEHAVIOURS |
| --- | --- |
| 2025-08-26 12:04:48 | 0.00 |
| 2025-08-26 15:38:44 | 0.00 |
| 2025-08-27 10:29:06 | 0.00 |
| 2025-08-27 13:23:16 | 0.00 |
| 2025-08-27 14:20:48 | 0.00 |
| 2025-08-27 16:24:53 | 0.00 |
| 2025-08-28 17:29:59 | 0.00 |
| 2025-08-29 15:47:02 | 0.00 |
| 2025-11-14 15:50:06.044992 | 0.00 |


TA-RELEASES#

Construction of releases for the nlohmann/json library is fully repeatable and the results are fully reproducible, with any exceptions documented and justified.

Supported Requests:

Supporting Items:

References:

  • TSF/trustable/assertions/TA-RELEASES-CHECKLIST.md

Checklist for TA-RELEASES from Codethink#

  • How confident are we that all components are taken from within our controlled environment?

    Answer: This library does not take anything from outside of this repository.

  • How confident are we that all of the tools we are using are also under our control?

    Answer: The version of nlohmann/json that is documented with this documentation is under the full control of the Eclipse S-CORE organisation.

  • Are our builds repeatable on a different server, or in a different context?

    Answer: Since there is no “build” of the header-only library, yes; a minimal consumer sketch follows this checklist.

  • How sure are we that our builds don’t access the internet?

    Answer: There is no implemented access to the internet in the library itself. The test suite is downloaded from within Eclipse S-CORE.

  • How many of our components are non-reproducible?

    Answer: The single component is reproducible.

  • How confident are we that our reproducibility check is correct?

    Answer: Quite confident.
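
To illustrate that “building” amounts to compiling a consumer against the single header, here is a minimal sketch; the include path is an assumption based on the repository layout:

```cpp
// Sketch: a minimal consumer of the header-only library.
// Compile with e.g. `g++ -std=c++11 -I single_include demo.cpp`.
#include <iostream>
#include <nlohmann/json.hpp>

int main() {
    auto j = nlohmann::json::parse(R"({"release": "reproducible"})");
    std::cout << j.dump(2) << '\n';  // pretty-print with 2-space indent
}
```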

Fallacies:

None

Graph:

No Image

| date-time | TA-RELEASES |
| --- | --- |
| 2025-08-26 12:04:48 | 0.00 |
| 2025-08-26 15:38:44 | 0.00 |
| 2025-08-27 10:29:06 | 0.00 |
| 2025-08-27 13:23:16 | 0.00 |
| 2025-08-27 14:20:48 | 0.00 |
| 2025-08-27 16:24:53 | 0.00 |
| 2025-08-28 17:29:59 | 0.00 |
| 2025-08-29 15:47:02 | 0.00 |
| 2025-11-14 15:50:06.044992 | 0.00 |


TA-SUPPLY_CHAIN#

All sources and tools for the nlohmann/json library are mirrored in our controlled environment.

Supported Requests:

Supporting Items:

References:

  • TSF/trustable/assertions/TA-SUPPLY_CHAIN-CHECKLIST.md

Checklist for TA-SUPPLY_CHAIN from Codethink#

  • Could there be other components, missed from the list?

    Answer: Since the library does not contain any external components, no.

  • Does the list include all toolchain components?

    Answer: Since the library does not contain any external components, yes.

  • Does the toolchain include a bootstrap?

    Answer: No.

  • Could the content of a mirrored project be compromised by an upstream change?

    Answer: Since the library does not contain any external components, no.

  • Are mirrored projects up to date with the upstream project?

    Answer: Yes, the library is up to date with the most recent release of the original nlohmann/json.

  • Are mirrored projects based on the correct upstream?

    Answer: Yes.

Fallacies:

None

Graph:

No Image

| date-time | TA-SUPPLY_CHAIN |
| --- | --- |
| 2025-08-26 12:04:48 | 0.00 |
| 2025-08-26 15:38:44 | 0.00 |
| 2025-08-27 10:29:06 | 0.00 |
| 2025-08-27 13:23:16 | 0.00 |
| 2025-08-27 14:20:48 | 0.00 |
| 2025-08-27 16:24:53 | 0.00 |
| 2025-08-28 17:29:59 | 0.00 |
| 2025-08-29 15:47:02 | 0.00 |
| 2025-11-14 15:50:06.044992 | 0.00 |


TA-TESTS#

All tests for the nlohmann/json library, and its build and test environments, are constructed from controlled/mirrored sources and are reproducible, with any exceptions documented.

Supported Requests:

Supporting Items:

References:

  • TSF/trustable/assertions/TA-TESTS-CHECKLIST.md

Checklist for TA-TESTS from Codethink#

  • How confident are we that our test tooling and environment setups used for tests, fault inductions, and analyses are reproducible?

    Answer: The tests can be reproduced at any time on any machine running the provided versions of the operating systems and compilers (TODO, cf. AOU-14).

  • Are any exceptions identified, documented and justified?

    Answer: To the best of our understanding, there are no exceptions identified.

  • How confident are we that all test components are taken from within our controlled environment?

    Answer: All tests are either self-contained or download test data from within Eclipse S-CORE.

  • How confident are we that all of the test environments we are using are also under our control?

    Answer: The environments are standard Docker images of Ubuntu and standard versions of compilers.

  • Do we record all test environment components, including hardware and infrastructure used for exercising tests and processing input/output data?

    Answer: No; since the tests are independent of the hardware, these data are not collected.

  • How confident are we that all tests scenarios are repeatable?

    Answer: All test scenarios are repeated daily in the CI pipeline.

Fallacies:

None

Graph:

No Image

| date-time | TA-TESTS |
| --- | --- |
| 2025-08-26 12:04:48 | 0.00 |
| 2025-08-26 15:38:44 | 0.00 |
| 2025-08-27 10:29:06 | 0.00 |
| 2025-08-27 13:23:16 | 0.00 |
| 2025-08-27 14:20:48 | 0.00 |
| 2025-08-27 16:24:53 | 0.00 |
| 2025-08-28 17:29:59 | 0.00 |
| 2025-08-29 15:47:02 | 0.00 |
| 2025-11-14 15:50:06.044992 | 0.00 |


TA-UPDATES#

nlohmann/json library components, configurations and tools are updated under specified change and configuration management controls.

Supported Requests:

Supporting Items:

References:

  • TSF/trustable/assertions/TA-UPDATES-CHECKLIST.md

Checklist for TA-UPDATES from Codethink#

  • Where are the change and configuration management controls specified?

    Answer: WIP

  • Are these controls enforced for all of components, tools, data, documentation and configurations?

    Answer: The S-CORE methodology is followed; compliance with it enforces that the change process is followed.

  • Are there any ways in which these controls can be subverted, and have we mitigated them?

    Answer: Yes; the change process can simply not be followed. We have no real means of enforcing it other than trusting that the committers follow the S-CORE processes.

  • Does change control capture all potential regressions?

    Answer: Due to the test coverage of 99.186%, it is unlikely that a potential regression is not captured.

  • Is change control timely enough?

    Answer: Not applicable; as far as can be judged right now, there is no imminent need to keep the library up to date.

  • Are all guidance and checks understandable and consistently followed?

    Answer: WIP

Fallacies:

None

Graph:

No Image

| date-time | TA-UPDATES |
| --- | --- |
| 2025-08-26 12:04:48 | 0.00 |
| 2025-08-26 15:38:44 | 0.00 |
| 2025-08-27 10:29:06 | 0.00 |
| 2025-08-27 13:23:16 | 0.00 |
| 2025-08-27 14:20:48 | 0.00 |
| 2025-08-27 16:24:53 | 0.00 |
| 2025-08-28 17:29:59 | 0.00 |
| 2025-08-29 15:47:02 | 0.00 |
| 2025-11-14 15:50:06.044992 | 0.00 |


TA-VALIDATION#

All specified tests are executed repeatedly, under defined conditions in controlled environments, according to specified objectives. (To revisit)

Supported Requests:

Supporting Items:

References:

  • TSF/trustable/assertions/TA-VALIDATION-CHECKLIST.md

Checklist for TA-VALIDATION from Codethink#

The following questions cannot yet be answered with confidence.

  • Is the selection of tests correct?

    Answer: Unclear; there is no defined criterion against which the selection of tests could be judged.

  • Are the tests executed enough times?

    Answer: Unclear; “enough times” is not defined. All test scenarios are repeated daily in the CI pipeline (cf. TA-TESTS).

  • How confident are we that all test results are being captured?

    Answer: Unclear; the required granularity of a test result is not defined.

  • Can we look at any individual test result, and establish what it relates to?

    Answer: Not yet answered.

  • Can we trace from any test result to the expectation it relates to?

    Answer: No; there are more tests than expectations, and in particular there are tests relating to inner workings of the library that are not used by S-CORE.

  • Can we identify precisely which environment (software and hardware) were used?

    Answer: Unclear how precise this identification is required to be. Moreover, the tests are expected to run independently of the underlying hardware, since this is pure software.

  • How many pass/fail results would be expected, based on the scheduled tests?

    Answer: Zero fails.

  • Do we have all of the expected results?

    Answer: Yes.

  • Do we have time-series data for all of those results?

    Answer: Yes, there are time-series data.

  • If there are any gaps, do we understand why?

    Answer: Unclear; “gaps” is not defined here.

  • Are the test validation strategies credible and appropriate?

    Answer: Unclear; “test validation strategies” is not defined here.

  • What proportion of the implemented tests are validated?

    Answer: None.

  • Have the tests been verified using known good and bad data?

    Answer: Not yet answered.

Fallacies:

None

Graph:

No Image

| date-time | TA-VALIDATION |
| --- | --- |
| 2025-08-26 12:04:48 | 0.00 |
| 2025-08-26 15:38:44 | 0.00 |
| 2025-08-27 10:29:06 | 0.00 |
| 2025-08-27 13:23:16 | 0.00 |
| 2025-08-27 14:20:48 | 0.00 |
| 2025-08-27 16:24:53 | 0.00 |
| 2025-08-28 17:29:59 | 0.00 |
| 2025-08-29 15:47:02 | 0.00 |
| 2025-09-10 16:20:35 | 0.00 |
| 2025-09-11 09:23:26 | 0.00 |
| 2025-09-12 13:29:30 | 0.00 |
| 2025-09-15 11:48:05 | 0.00 |
| 2025-09-15 16:25:33 | 0.00 |
| 2025-09-16 11:32:53 | 0.00 |
| 2025-09-16 15:15:09 | 0.00 |
| 2025-09-17 09:31:47 | 0.00 |
| 2025-09-19 12:26:04 | 0.00 |
| 2025-09-26 09:39:39 | 0.00 |
| 2025-09-29 10:05:38 | 0.00 |
| 2025-10-06 15:13:57 | 0.00 |
| 2025-10-07 17:09:21 | 0.00 |
| 2025-10-14 18:05:11 | 0.00 |
| 2025-10-23 16:32:31 | 0.00 |
| 2025-10-24 08:52:52 | 0.00 |
| 2025-10-24 10:13:58 | 0.00 |
| 2025-10-27 12:37:56 | 0.00 |
| 2025-11-03 13:09:23 | 0.00 |
| 2025-11-06 18:37:31 | 0.00 |
| 2025-11-14 15:50:06.044992 | 0.00 |