
Conversation

@pjanecek0
Contributor

Please check if the PR fulfills these requirements

  • The commit message follows our guidelines
  • Tests for the changes have been added (for bug fixes / features)
  • Docs have been added / updated (for bug fixes / features)
  • A PR or issue has been opened in all impacted repositories (if any)

Does this PR already have an issue describing the problem?

#3607

What kind of change does this PR introduce?

Refactor PSS/E duplicated ID handling.

What is the current behavior?

In the previous implementation, when a data record with the same ID was found, its ID was modified and a new element was created. This does not match PSS/E behavior, which in such cases updates the data of the existing element.

What is the new behavior (if this is a feature change)?

The new implementation uses a "last record wins" strategy: when a duplicate ID is encountered in the data, the last occurrence replaces all previous ones.
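As an illustration of the "last record wins" strategy (a minimal standalone sketch, not the actual PR code; the `dedupe` helper and the sample records are hypothetical), a `LinkedHashMap` gives exactly this behavior, since re-putting an existing key replaces the value while keeping the key's original position:

```java
import java.util.*;
import java.util.function.Function;

public class LastRecordWins {
    // Hypothetical helper: preserves the position of the first occurrence
    // of each ID, but the last record with a given ID overwrites earlier ones.
    static <T> List<T> dedupe(List<T> records, Function<T, String> idOf) {
        Map<String, T> byId = new LinkedHashMap<>();
        for (T record : records) {
            // put() on an existing key replaces the value without
            // changing the key's insertion-order position
            byId.put(idOf.apply(record), record);
        }
        return new ArrayList<>(byId.values());
    }

    public static void main(String[] args) {
        List<String[]> records = List.of(
                new String[]{"LOAD1", "10.0"},
                new String[]{"LOAD2", "20.0"},
                new String[]{"LOAD1", "15.0"}); // duplicate ID: last one wins
        List<String[]> unique = dedupe(records, r -> r[0]);
        System.out.println(unique.size());    // 2
        System.out.println(unique.get(0)[1]); // 15.0
    }
}
```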

Does this PR introduce a breaking change or deprecate an API?

  • Yes
  • No

If yes, please check if the following requirements are fulfilled

  • The Breaking Change or Deprecated label has been added
  • The migration steps are described in the following section

What changes might users need to make in their application due to this PR? (migration steps)

Other information:

@olperr1 olperr1 self-requested a review November 12, 2025 16:17
}

private static void testInvalid(String resourcePath, String sample, String message) {
PsseException exception = Assertions.assertThrows(PsseException.class, () -> load(resourcePath, sample));
I think this method is unused, you can remove it.

unique.add(psseObject);
}
foundIds.add(id);
});

Your method is perfectly valid, but since the map values are never null, we can simplify it a bit to be faster. Here is my proposal:

private <T> List<T> fixDuplicatedIds(List<T> psseObjects, Function<T, String> idBuilder, String elementTypeName) {
    List<T> unique = new ArrayList<>();
    Map<String, Integer> indexes = new HashMap<>();

    psseObjects.forEach(psseObject -> {
        var id = idBuilder.apply(psseObject);
        Integer index = indexes.get(id);
        if (index != null) {
            LOGGER.warn("Duplicated {} Id: {}", elementTypeName, id);
            unique.set(index, psseObject);
        } else {
            indexes.put(id, unique.size());
            unique.add(psseObject);
        }
    });

    return unique;
}
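To see the index-map proposal in action, here is a self-contained harness (the class name, the sample data, and the `System.err` call standing in for `LOGGER.warn` are all illustrative, not from the PR):

```java
import java.util.*;
import java.util.function.Function;

public class FixDuplicatedIdsDemo {
    // Standalone copy of the proposed index-map approach; logging is
    // replaced by a plain System.err message for this sketch.
    static <T> List<T> fixDuplicatedIds(List<T> psseObjects, Function<T, String> idBuilder, String elementTypeName) {
        List<T> unique = new ArrayList<>();
        Map<String, Integer> indexes = new HashMap<>();
        psseObjects.forEach(psseObject -> {
            String id = idBuilder.apply(psseObject);
            Integer index = indexes.get(id);
            if (index != null) {
                System.err.printf("Duplicated %s Id: %s%n", elementTypeName, id);
                // last record wins, original list position preserved
                unique.set(index, psseObject);
            } else {
                indexes.put(id, unique.size());
                unique.add(psseObject);
            }
        });
        return unique;
    }

    public static void main(String[] args) {
        // id is the part before the slash in this made-up record format
        List<String> loads = List.of("L1/a", "L2/b", "L1/c");
        List<String> unique = fixDuplicatedIds(loads, s -> s.split("/")[0], "Load");
        System.out.println(unique); // [L1/c, L2/b]
    }
}
```

Note the single `indexes.get(id)` lookup per record: because list indexes are never null, one `get` replaces the `contains`-then-`get` pair, which is the speedup the comment refers to.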

@FunctionalInterface
public interface IdFixer<T> {
    String fix(T t, Set<String> usedIds);
}

I think this method is not used, we can remove it.

@marqueslanauja
Contributor

I checked the performance by importing a large case, and I didn’t see any impact.

@rolnico rolnico added the PSS/E label Nov 20, 2025
@rolnico rolnico changed the title Duplicate ids Refactor PSS/E duplicated ID handling Nov 20, 2025
