
10 changes: 10 additions & 0 deletions CHANGELOG.md
Original file line number Diff line number Diff line change
@@ -8,12 +8,22 @@

## _Unreleased_

---

## v0.14.0

---

- Add `SpectraFit` to [Conda-Forge][2] as [spectrafit][3] package.
- Extend `SpectraFit` to print current peak values as `dataframe`
in Jupyter-Notebook.
- Add converters for _input-_, _output-_, and _data-files_.
- Add extended _output-print_ for `SpectraFit` in Jupyter-Notebook.

## v0.13.1

---

- Fix crashed regression analysis due to _negative_ values in the `y`-data.

## v0.13.0
31 changes: 31 additions & 0 deletions docs/api/converter_api.md
@@ -0,0 +1,31 @@
!!! info "About the Converter API"

The **Converter API** is a new feature in the v0.12.x release of
`SpectraFit` with major focus on:

1. Data Validation
2. Settings Management

In general, input and data files are converted to the internal data format,
which are [dictionaries][1] for the input data and [pandas dataframes][2]
for the data files. The Converter API is realized by using the
[`ABC`-class][3] and the [`@abstractmethod`][4] decorator, while
the File API is using the [pydantic][5] library.
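As a minimal sketch of this `ABC`/`@abstractmethod` design (class names here are hypothetical, not the actual `spectrafit.plugins` classes), the base class declares the conversion contract and each concrete converter implements it for one format:

```python
import json
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Any, Dict


class Converter(ABC):
    """Abstract base class: every converter must implement `convert`."""

    @abstractmethod
    def convert(self, infile: Path) -> Dict[str, Any]:
        """Read `infile` and return the internal dictionary format."""


class JSONConverter(Converter):
    """Concrete converter for `.json` input files."""

    def convert(self, infile: Path) -> Dict[str, Any]:
        return json.loads(infile.read_text())
```

Because `Converter` carries an abstract method, instantiating it directly raises a `TypeError`, so every supported format is forced to provide its own `convert`.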

### Meta Data Converter Class

::: spectrafit.plugins.converter

### Input and Output File Converter for object-oriented formats

::: spectrafit.plugins.file_converter

### Data Converter for rational data formats like CSV, Excel, etc.

::: spectrafit.plugins.data_converter

[1]: https://docs.python.org/3/tutorial/datastructures.html#dictionaries
[2]: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html
[3]: https://docs.python.org/3/library/abc.html#abc.ABC
[4]: https://docs.python.org/3/library/abc.html#abc.abstractmethod
[5]: https://pydantic-docs.helpmanual.io/
4 changes: 4 additions & 0 deletions docs/api/data_model_api.md
Expand Up @@ -31,3 +31,7 @@
### Tools and Utilities

::: spectrafit.api.tools_model

### File Model API

::: spectrafit.api.file_model
@@ -1,12 +1,12 @@
With the command `spectrafit-converter` input and output files can be converted. The supported file formats are:
With the command `spectrafit-file-converter` input and output files can be converted. The supported file formats are:

1. `.json`
2. `.yaml`
3. `.toml`
2. `.yaml` or `.yml`
3. `.toml` or `.lock` (output only)

```shell
spectrafit-converter -h
usage: spectrafit-converter [-h] [-f {lock,ymltoml,yaml,json}] infile
spectrafit-file-converter -h
usage: spectrafit-file-converter [-h] [-f {lock,toml,yml,yaml,json}] infile

Converter for 'SpectraFit' input and output files.

@@ -15,7 +15,7 @@ With the command `spectrafit-converter` input and output files can be converted.

options:
-h, --help show this help message and exit
-f {lock,ymltoml,yaml,json}, --format {lock,ymltoml,yaml,json}
-f {lock,toml,yml,yaml,json}, --format {lock,toml,yml,yaml,json}
File format for the conversion.

```
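Conceptually, the conversion is a round trip through a Python dictionary: parse the input file into a `dict`, then serialize it in the target format. A minimal sketch of the parsing half, with a hypothetical helper name (only `.json` is wired up here, since the real plugin also handles the YAML and TOML variants listed above):

```python
import json
from pathlib import Path
from typing import Any, Dict


def load_input(infile: Path) -> Dict[str, Any]:
    """Hypothetical sketch: pick a parser based on the file suffix."""
    loaders = {".json": json.loads}
    if infile.suffix not in loaders:
        raise ValueError(f"Unsupported input format: {infile.suffix}")
    return loaders[infile.suffix](infile.read_text())
```

Dispatching on `infile.suffix` keeps adding a new format down to one extra dictionary entry.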
69 changes: 69 additions & 0 deletions docs/plugins/spectrafit-data-converter.md
@@ -0,0 +1,69 @@
With the command `spectrafit-data-converter` data files can be converted to
`CSV` or [pandas dataframes][1]. Currently, the following data formats are
supported:

- [x] The [`Athena`][2] data format
- [x] Text files with a header and data separated by spaces or tabs
- [ ] More formats are coming soon

```shell
➜ spectrafit-data-converter -f ATHENA -h
usage: spectrafit-data-converter [-h] [-f {TXT,ATHENA}] infile

Converter for 'SpectraFit' from data files to CSV files.

positional arguments:
infile Filename of the data file to convert.

options:
-h, --help show this help message and exit
-f {TXT,ATHENA}, --file-format {TXT,ATHENA}
File format for the conversion.
```

!!! example "From ATHENA to CSV"

To convert a data file from the `Athena` format to `CSV` use:

```shell
spectrafit-data-converter Examples/athena.nor -f ATHENA
```

The original data file looks like this, but can contain more rows:

```txt
# XDI/1.0 Demeter/0.9.26
# Demeter.output_filetype: multicolumn normalized mu(E)
# Element.symbol: V
# Element.edge: K
# Column.1: energy eV
# Column.2: JZP-4-merged
#------------------------
# energy JZP-4-merged
5263.8492 0.12737417
5273.8501 0.10231758
5283.8503 0.81114410E-01
5293.8492 0.61588687E-01
5303.8493 0.47158833E-01
5313.8497 0.35236642E-01
5323.8502 0.25314870E-01
5333.8506 0.18438437E-01
5343.8501 0.12077480E-01
```

will be converted to:

```csv
energy,JZP-4-merged
5263.8492,0.12737417
5273.8501,0.10231758
5283.8503,0.08111441
5293.8492,0.06158869
5303.8493,0.04715883
5313.8497,0.03523664
5323.8502,0.02531487
5333.8506,0.01843844
5343.8501,0.01207748
```

[1]: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html
[2]: https://bruceravel.github.io/demeter/documents/Athena/other/plugin.html
4 changes: 3 additions & 1 deletion mkdocs.yaml
@@ -161,7 +161,8 @@ nav:
- Fitting: doc/fitting.md
- Statistics: doc/statistics.md
- Plugins:
- File-Format-Conversion: plugins/spectrafit-converter.md
- File-Format-Conversion: plugins/spectrafit-file-converter.md
- Data-Format-Conversion: plugins/spectrafit-data-converter.md
- Jupyter-Notebook-Integration: plugins/jupyter-spectrafit-interface.md
- API:
- SpectraFit: api/spectrafit_api.md
@@ -171,6 +172,7 @@
- Reporting: api/reporting_api.md
- Tools: api/tools_api.md
- Data Model: api/data_model_api.md
- Converters: api/converter_api.md
- Support:
- Contact: contact.md
- License: license.md
5 changes: 3 additions & 2 deletions pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "SpectraFit"
version = "0.13.2"
version = "0.14.0"
description = "Fast fitting of 2D- and 3D-Spectra with established routines"
readme = "README.md"
authors = ["Anselm Hahn <[email protected]>"]
@@ -104,7 +104,8 @@ build-backend = "poetry.core.masonry.api"

[tool.poetry.scripts]
spectrafit = "spectrafit.spectrafit:command_line_runner"
spectrafit-converter = "spectrafit.plugins.converter:command_line_runner"
spectrafit-file-converter = "spectrafit.plugins.file_converter:command_line_runner"
spectrafit-data-converter = "spectrafit.plugins.data_converter:command_line_runner"
spectrafit-jupyter = "spectrafit.app.app:jupyter"

[tool.poetry.extras]
2 changes: 1 addition & 1 deletion spectrafit/__init__.py
@@ -1,2 +1,2 @@
"""SpectraFit, fast command line tool for fitting data."""
__version__ = "0.13.2"
__version__ = "0.14.0"
62 changes: 62 additions & 0 deletions spectrafit/api/file_model.py
@@ -0,0 +1,62 @@
"""Definition of the data file model."""

from pathlib import Path
from typing import Callable
from typing import List
from typing import Optional
from typing import Union

from pydantic import BaseModel
from pydantic import Field
from pydantic import validator


class DataFileAPI(BaseModel):
"""Definition of a data file."""

skiprows: Optional[int] = Field(
default=None,
description="Number of lines to skip at the beginning of the file.",
)
skipfooter: int = Field(
...,
description="Number of lines to skip at the end of the file.",
)
delimiter: str = Field(
...,
description="Delimiter to use.",
)
comment: Optional[str] = Field(
default=None,
description="Comment marker to use.",
)
names: Optional[Callable[[Path, str], Optional[List[str]]]] = Field(
default=None,
description="Column names can be provided by a list of strings or a function.",
)
header: Optional[Union[int, List[str]]] = Field(
default=None,
description="Column headers to use.",
)
file_suffixes: List[str] = Field(
...,
description="File suffixes to use.",
)

@validator("delimiter")
@classmethod
def check_delimiter(cls, v: str) -> Optional[str]:
"""Check if the delimiter is valid."""
if v in {" ", "\t", ",", ";", "|", r"\s+"}:
return v
else:
raise ValueError(f" {v} is not a valid delimiter.")

@validator("comment")
@classmethod
def check_comment(cls, v: str) -> Optional[str]:
"""Check if the comment marker is valid."""
if v is None or v in {"#", "%"}:
return v
else:
raise ValueError(f" {v} is not a valid comment marker.")
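Outside of pydantic, the two validators above reduce to simple membership checks against a whitelist; a standalone sketch of that logic (plain functions, not the model's actual API):

```python
from typing import Optional

VALID_DELIMITERS = {" ", "\t", ",", ";", "|", r"\s+"}
VALID_COMMENTS = {"#", "%"}


def check_delimiter(v: str) -> str:
    # Mirrors DataFileAPI.check_delimiter: reject anything outside the set.
    if v not in VALID_DELIMITERS:
        raise ValueError(f"{v} is not a valid delimiter.")
    return v


def check_comment(v: Optional[str]) -> Optional[str]:
    # Mirrors DataFileAPI.check_comment: `None` means no comment marker.
    if v is not None and v not in VALID_COMMENTS:
        raise ValueError(f"{v} is not a valid comment marker.")
    return v
```

Keeping the allowed values in module-level sets makes the accepted delimiters easy to extend without touching the validator bodies.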
123 changes: 123 additions & 0 deletions spectrafit/api/test/test_file_model.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,123 @@
"""Test the file model."""
import pytest

from spectrafit.api.file_model import DataFileAPI


def test_delimiter_space() -> None:
"""Test the delimiter validator."""
data_file = DataFileAPI(
skiprows=0,
skipfooter=0,
delimiter=" ",
file_suffixes=[".txt"],
)
assert data_file.delimiter == " "


def test_delimiter_tab() -> None:
"""Test the delimiter validator for tab separation."""
data_file = DataFileAPI(
skiprows=0,
skipfooter=0,
delimiter="\t",
file_suffixes=[".txt"],
)
assert data_file.delimiter == "\t"


def test_delimiter_comma() -> None:
"""Test the delimiter validator for comma separation."""
data_file = DataFileAPI(
skiprows=0,
skipfooter=0,
delimiter=",",
file_suffixes=[".txt"],
)
assert data_file.delimiter == ","


def test_delimiter_semicolon() -> None:
"""Test the delimiter validator for semicolon separation."""
data_file = DataFileAPI(
skiprows=0,
skipfooter=0,
delimiter=";",
file_suffixes=[".txt"],
)
assert data_file.delimiter == ";"


def test_delimiter_pipe() -> None:
"""Test the delimiter validator for pipe separation."""
data_file = DataFileAPI(
skiprows=0,
skipfooter=0,
delimiter="|",
file_suffixes=[".txt"],
)
assert data_file.delimiter == "|"


def test_delimiter_regex() -> None:
"""Test the delimiter validator for regex separation."""
data_file = DataFileAPI(
skiprows=0,
skipfooter=0,
delimiter=r"\s+",
file_suffixes=[".txt"],
)
assert data_file.delimiter == r"\s+"


def test_delimiter_regex_error() -> None:
"""Test the delimiter validator for regex error."""
with pytest.raises(ValueError):
DataFileAPI(
skiprows=0,
skipfooter=0,
delimiter=r"\s",
file_suffixes=[".txt"],
)


def test_comment() -> None:
"""Test the comment marker validator."""
data_file = DataFileAPI(
skiprows=0,
skipfooter=0,
delimiter=" ",
comment="#",
file_suffixes=[".txt"],
)
assert data_file.comment == "#"

data_file = DataFileAPI(
skiprows=0,
skipfooter=0,
delimiter=" ",
comment="%",
file_suffixes=[".txt"],
)
assert data_file.comment == "%"

data_file = DataFileAPI(
skiprows=0,
skipfooter=0,
delimiter=" ",
comment=None,
file_suffixes=[".txt"],
)
assert data_file.comment is None


def test_comment_error() -> None:
"""Test the comment marker validator for error."""
with pytest.raises(ValueError):
DataFileAPI(
skiprows=0,
skipfooter=0,
delimiter=" ",
comment="x",
file_suffixes=[".txt"],
)