Integration of SR4HA#2
Conversation
* Made transforms callable * Changed test to use new callable transform syntax
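The "callable transform" idea can be sketched as follows (hypothetical minimal class; flowcean's real `Transform` base class and signatures may differ):

```python
class Select:
    """Keep only the named keys of a record."""

    def __init__(self, columns):
        self.columns = list(columns)

    def __call__(self, record):
        # The instance itself is callable, so it can be applied directly.
        return {key: record[key] for key in self.columns}

select = Select(["x", "y"])
print(select({"x": 1, "y": 2, "z": 3}))  # {'x': 1, 'y': 2}
```

Making the instance callable lets a transform be used anywhere a plain function is expected, e.g. `map(select, records)`.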
* Add rename transform * Add test for rename transform
* Add `preload_alp_data` script to convert data to single file * Adjust run example to use pre processed data * Remove old data folders * Add alp data back as parquet files
…cean#159) Bumps [com.google.protobuf:protobuf-java](https://github.com/protocolbuffers/protobuf) from 3.23.0 to 3.25.5. - [Release notes](https://github.com/protocolbuffers/protobuf/releases) - [Changelog](https://github.com/protocolbuffers/protobuf/blob/main/protobuf_release.bzl) - [Commits](protocolbuffers/protobuf@v3.23.0...v3.25.5) --- updated-dependencies: - dependency-name: com.google.protobuf:protobuf-java dependency-type: direct:production ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Add match_sampling_rate.py * Add seeds for all modules used in boiler example * Add function to initialize random states for different libraries * Fix test_train_test_split * Fix test_train_test_split * Set different random seeds for different libraries * Fix test
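The "initialize random states for different libraries" commit can be illustrated with a sketch like this (the helper name and the set of covered libraries are assumptions, not flowcean's actual code):

```python
import random

def initialize_random_state(seed: int) -> None:
    """Seed every RNG source in one place, so runs are reproducible."""
    random.seed(seed)
    try:
        import numpy as np  # optional: only seeded if numpy is installed
        np.random.seed(seed)
    except ImportError:
        pass

initialize_random_state(42)
first = random.random()
initialize_random_state(42)
assert first == random.random()  # same seed, same draw
```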
* Simplified row length check expression * Pin polars version to last working
* update landing page * add 'consider' * Flowcean is designed + Ref --------- Co-authored-by: Hendrik Rose <hendrik.willhelm.rose@tuhh.de>
* Implement high- and lowpass filter * Add test for high- and lowpass filter * Fixed comment * Fixed variable names * Fix import path * Fix old calling syntax * Rename from Filter to SignalFilter
* Implement one-hot transform * Add test for one-hot transform * Allow arbitrary data for one-hot * Remove integer test * Added optional argument for categorical values * Move creation from dataframe into static creation method * Adjust tests to use new creation method * Add test for explicit categories * Added docstring for `from_dataframe` method * Add check for missing values/categories * Adjust calling and overloads to new syntax * Fix typo * Remove comment * Move check_for_missing_categories to constructor / init * Fix test
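The one-hot transform with explicit categories and a missing-category check can be sketched on a single value (illustrative only; the real flowcean transform operates on dataframe columns):

```python
class OneHot:
    """One-hot encode a value against an explicit category list."""

    def __init__(self, categories):
        self.categories = list(categories)

    def __call__(self, value):
        if value not in self.categories:
            # Mirrors the missing-category check described above.
            raise ValueError(f"unknown category: {value!r}")
        return [1 if c == value else 0 for c in self.categories]

encoder = OneHot(["red", "green", "blue"])
print(encoder("green"))  # [0, 1, 0]
```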
* Introduce uv for project management * update documentation * mv LICENSE.txt -> LICENSE * emojies * preparation for automatic releases * improve documentation * PEP723 script inline metadata * add cd in installation guide
* Add itl-nas remote * Remove new line
* Add one-cold transform with tests * Improved inline documentation * Add example in main docstring * Fix typo
* Add filter transform with test * Add sympy as explicit dependency * Update lock file with sympy * Replace filter_mode with class based system * Remove old literal type * Sort __all__ * Add examples in docstring * Add docstring for FilterExpression
* Reworked transforms and environments to yield LazyFrames * Fix tests to work with lazy frames * Fix way to get column names of lazy frames * Fix tests for lazy frames * Change learner inputs and outputs back to DataFrame * Fix some typo violations * Add caching for Dataset length * Fixed bad data types in models and learners * Remove todo comment. Code is going to change anyway * Fix more type problems * Adapt Cast transform to lazy * Fix bad return * Fix streaming environment not working with lazy frames * Fix Lambda transform signature * Fix new transforms for lazy processing * Fix error in ode_environment test
* Reset on main * Move deps to dev-dependencies * Move deps to dev-dependencies
* add a script to save particle cloud images * plot particle position and orientation * plot mean positions of the particles * plot particle mean orientation * rotate plot by mean orientation * before and after rotation comparison * discard the particles outside the defined region * save the plot as 36x36 pixel image * process and generate images from cached ros json data * create a polars time series df containing images * Create particle cloud image transform class * rename and cleanup * Improve data processing * Improve image processing * use numpy vectorization * delete outdated files * push run.py * Optimize memory usage and speed * make ruff happy * Apply pre-commit fixes: updated .gitignore * Fix ruff formatting issues * update test assertion * Add tests for particle cloud image transform * fix ruff * Keep particle cloud feature * fix ruff * organize files * apply transforms in sequence * uv.lock * delete particle cloud statistics transform * remove ParticleCloudStatistics from exports --------- Co-authored-by: Markus Knitt <markus.knitt@tuhh.de>
* Add first transform with test * Add Last transform with tests * Add optional replace flag
* Add median transform * Update src/flowcean/polars/transforms/median.py Co-authored-by: Maximilian Schmidt <maximilian@schmidt.so> --------- Co-authored-by: Maximilian Schmidt <maximilian@schmidt.so>
* Add mean transform for time series data * Add non-destructive calculation * Update src/flowcean/polars/transforms/mean.py Co-authored-by: Maximilian Schmidt <maximilian@schmidt.so> * Add replace flag --------- Co-authored-by: Maximilian Schmidt <maximilian@schmidt.so>
* Add replace flag and test for feature replacement * Extend median test
* Add Mode transform with tests * Add helper method to get the time series datatype * Fixed disabled tests * Return the first mode if multiple values have the same occurrence count * Fix multi mode behaviour and added test * Return the largest mode if there are multiple
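The tie-breaking rule described in these commits (return the largest value when several share the highest count) can be sketched as:

```python
from collections import Counter

def mode(values):
    """Most frequent value; on a tie, the largest candidate wins."""
    counts = Counter(values)
    highest = max(counts.values())
    return max(v for v, c in counts.items() if c == highest)

print(mode([1, 2, 2, 3, 3]))  # 3: tie between 2 and 3, largest wins
```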
… time series features (flowcean#242) * Add support for nearest interpolation method * Add fill strategy for missing interpolation values * Add test for scalar time series * Add support for scalar time series
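Nearest interpolation with a fill strategy for out-of-range values can be sketched like this (illustrative function, not flowcean's actual API):

```python
import bisect

def interp_nearest(times, values, t, fill=None):
    """Nearest-neighbour lookup with a fill value outside the range."""
    if not times or t < times[0] or t > times[-1]:
        return fill  # fill strategy for out-of-range queries
    i = bisect.bisect_left(times, t)
    if i == 0:
        return values[0]
    # Pick whichever neighbour is closer in time.
    return values[i] if times[i] - t < t - times[i - 1] else values[i - 1]

print(interp_nearest([0.0, 1.0, 2.0], [10, 20, 30], 1.6))  # 30
```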
* Add localization status transform * Add test for localization status transform
* Add unnest transform * Remove numpy import * Rename test and add example
* Add dictionary support for cast and test new argument option * Update docstring
newLabAspect
left a comment
Please add typing. Maybe you could run the flowcean checks on your side.
```python
def main() -> None:
    flowcean.cli.initialize_logging()

    data = DataFrame.from_uri(uri="file:C:/Users/49157/Desktop/PA/SR_Original_code/SymbolicRegression4HA/data/converter/short_wto_zeros_data_converter_omega400e3_beta40e3_Q10_theta60.csv")
```
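The requested type annotations could look like this (the function below is illustrative, not taken from the PR):

```python
# Before (as flagged in review): parameters and return value are untyped.
# def scale(values, factor):
#     return [v * factor for v in values]

# After: annotated, so tools like mypy/ruff (run by the flowcean checks)
# can verify call sites.
def scale(values: list[float], factor: float) -> list[float]:
    return [v * factor for v in values]

print(scale([1.0, 2.0], 3.0))  # [3.0, 6.0]
```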
This won't work for other people. Add the data to DVC and make it a relative path
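One way to follow this suggestion (file names are illustrative; assumes DVC is already initialised in the repository):

```sh
# Track the dataset with DVC so git only stores a small pointer file
dvc add data/converter/buck_converter.csv
git add data/converter/buck_converter.csv.dvc .gitignore
git commit -m "Track converter dataset with DVC"
```

The script can then load the file via the relative path `data/converter/buck_converter.csv`, which works on every contributor's machine.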
Do not track these files. Those are artifacts.
This should be a derived class of Transform.
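The requested pattern could look like this (the `Transform` base class below is a stand-in; flowcean ships its own ABC whose interface may differ):

```python
from abc import ABC, abstractmethod

class Transform(ABC):
    """Stand-in for flowcean's Transform base class."""

    @abstractmethod
    def apply(self, data: dict) -> dict: ...

class DropColumn(Transform):
    """A proper Transform subclass instead of a free-standing function."""

    def __init__(self, column: str) -> None:
        self.column = column

    def apply(self, data: dict) -> dict:
        return {k: v for k, v in data.items() if k != self.column}

print(DropColumn("z").apply({"x": 1, "z": 2}))  # {'x': 1}
```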
The output should not be part of the inputs.
Just put a `pass` here for now, as we do not use this here.
```python
# Load the CSV
df = pd.read_csv("C:/Users/49157/Documents/MATLAB/buck_converter_output.csv")

# Strip any extra spaces from column names just in case
```
```python
df = pd.read_csv("hybrid_sensor_data.csv")
df_subset = df.head(2000)
df_subset.to_csv('sampled_data.csv', index=False)
print("First 2000 datapoints have been saved to sampled_data.csv")
```
Use logging rather than printing.
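The print call above could be replaced along these lines (the helper is illustrative, not part of the PR):

```python
import logging

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)

def report_saved(path: str, n_rows: int) -> str:
    """Log instead of print, so verbosity stays configurable."""
    message = f"First {n_rows} datapoints have been saved to {path}"
    logger.info(message)
    return message

report_saved("sampled_data.csv", 2000)
```

Unlike `print`, the log level and destination can then be controlled centrally, e.g. by `flowcean.cli.initialize_logging()`.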
```python
df_subset.to_csv('sampled_data.csv', index=False)
print("First 2000 datapoints have been saved to sampled_data.csv")

# Get the total number of rows in the DataFrame
```
Add some description as a header? And add documentation, if possible.
Is this the same CSV read as in the other example?
Why not turn this into a transform that just slices the first samples?
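The suggested transform could be as small as this (sketch only; a dataframe version would simply call `df.head(n)` internally):

```python
class TakeFirst:
    """Transform that keeps only the first n samples."""

    def __init__(self, n: int) -> None:
        self.n = n

    def __call__(self, samples):
        return samples[: self.n]

print(TakeFirst(3)([1, 2, 3, 4, 5]))  # [1, 2, 3]
```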
Add a description or documentation.
```python
X_train = current_frame[self.learner.feature_names]
y_train = current_frame[self.target_var]
self.learner.fit(X_train, y_train)
print(self.learner.get_best())  # Debug
```
Remove this or change it to useful logging.
Updated STD
No description provided.