Repro for postgres deep nested joins #815
Explain your use case and expected results
Context
In Postgres, query column identifiers are subject to a 63-character limit (`NAMEDATALEN - 1`). Postgres will accept queries containing identifiers longer than this, but in the query response it truncates those identifiers to 63 characters.
This has an impact on Gorm when it attempts to map a query response back into a struct.
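The truncation can be seen without a database at all: the alias from this issue is 72 characters long, so Postgres reports only its first 63 characters as the column name. A minimal sketch (plain Go, no GORM or Postgres involved):

```go
package main

import "fmt"

func main() {
	// Column alias generated by GORM for the doubly nested field (from this issue).
	alias := "CurrentEmployerInformation__PreferredCompanyLanguage__international_code"

	fmt.Println(len(alias)) // 72 — over the 63-character limit

	// What Postgres reports back as the column name in the result set.
	fmt.Println(alias[:63]) // "...__internati" — the field name is cut off
}
```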
Test Case
I have added a test case with a doubly nested entity. The test case expects the `international_code` field to be populated after the join query, but it is `""`. This is a silent error.

It happens because the column alias in the generated SQL query, `CurrentEmployerInformation__PreferredCompanyLanguage__international_code`, is longer than 63 characters. Postgres accepts the query, but returns the identifier for this column truncated to 63 characters. Because of the truncation, Gorm cannot map the data back to its intended field, leaving it empty.
Note that this happens silently; all other columns are still populated. Using the `NamingStrategy` with a max identifier length of 63 does not affect the alias length during query building, and thus does not address this issue.
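The silent failure mode can be sketched in plain Go. This is not GORM's actual scanning code, just an illustration of the mechanism: the ORM looks up result columns by the full alias it generated, but Postgres hands back the truncated name, so the lookup misses and the field is simply never written:

```go
package main

import "fmt"

// truncateIdent mimics Postgres: identifiers longer than 63 characters
// come back truncated in the result set's column names.
func truncateIdent(name string) string {
	if len(name) > 63 {
		return name[:63]
	}
	return name
}

func main() {
	// The alias GORM generates for the doubly nested field (from this issue).
	fullAlias := "CurrentEmployerInformation__PreferredCompanyLanguage__international_code"

	// The column name Postgres actually reports in the response.
	returned := truncateIdent(fullAlias)

	// Mapping response columns back to struct fields by name:
	// the destination is registered under the full alias, so the
	// truncated name from the response never matches it.
	fields := map[string]*string{fullAlias: new(string)}
	if dest, ok := fields[returned]; ok {
		*dest = "en-US" // never reached: returned != fullAlias
	}

	// The field stays empty and no error is raised — a silent failure.
	fmt.Printf("value=%q\n", *fields[fullAlias])
}
```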