diff --git a/README.md b/README.md
index ee15f8e..8c9eaad 100644
--- a/README.md
+++ b/README.md
@@ -1,9 +1,11 @@
 # Hadar
 
-![PyPI](https://img.shields.io/pypi/v/hadar)
-![GitHub Workflow Status (branch)](https://img.shields.io/github/workflow/status/hadar-simulator/hadar/main/master)
-![https://sonarcloud.io/dashboard?id=hadar-solver_hadar](https://sonarcloud.io/api/project_badges/measure?project=hadar-solver_hadar&metric=alert_status)
-![https://sonarcloud.io/dashboard?id=hadar-solver_hadar](https://sonarcloud.io/api/project_badges/measure?project=hadar-solver_hadar&metric=coverage)
-![GitHub](https://img.shields.io/github/license/hadar-simulator/hadar)
+[![PyPI](https://img.shields.io/pypi/v/hadar)](https://pypi.org/project/hadar/)
+[![GitHub Workflow Status (branch)](https://img.shields.io/github/workflow/status/hadar-simulator/hadar/main/master)](https://github.com/hadar-simulator/hadar/actions)
+[![https://sonarcloud.io/dashboard?id=hadar-solver_hadar](https://sonarcloud.io/api/project_badges/measure?project=hadar-solver_hadar&metric=alert_status)](https://sonarcloud.io/dashboard?id=hadar-solver_hadar)
+[![https://sonarcloud.io/dashboard?id=hadar-solver_hadar](https://sonarcloud.io/api/project_badges/measure?project=hadar-solver_hadar&metric=coverage)](https://sonarcloud.io/dashboard?id=hadar-solver_hadar)
+[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/hadar-simulator/hadar/master?filepath=examples)
+[![website](https://img.shields.io/badge/website-hadar--simulator.org-blue)](https://www.hadar-simulator.org/)
+[![GitHub](https://img.shields.io/github/license/hadar-simulator/hadar)](https://github.com/hadar-simulator/hadar/blob/master/LICENSE)
 
 Hadar is an adequacy python library for deterministic and stochastic computation
 
@@ -15,48 +17,47 @@ Each kind of network has a need for adequacy. On one side, some network nodes
 need items such as watts, liters, packages. On the other side, some network nodes produce
 items. Applying adequacy to a network means trying to find the best available exchanges to avoid any lack at the best cost.
 
-For example, a electric grid can have some nodes wich produce too more power and some nodes wich produce not enough power.
-```
-+---------+          +---------+
-| Node A  |          | Node B  |
-|         |          |         |
-| load=20 +----------+ load=20 |
-| prod=30 |          | prod=10 |
-|         |          |         |
-+---------+          +---------+
-```
+For example, an electric grid can have some nodes which produce too much power and some nodes which produce not enough power.
+![adequacy](examples/Get%20Started/figure.png)
 
-In this case, A produce 10 more and B need 10 more. Perform adequecy is quiet easy : A will share 10 to B
-```
-+---------+          +---------+
-| Node A  |          | Node B  |
-|         | share 10 |         |
-| load=20 +--------->+ load=20 |
-| prod=30 |          | prod=10 |
-|         |          |         |
-+---------+          +---------+
-```
 
 ### Complexity comes soon
 The above example is simple, but the problem becomes very tricky with 10, 20 or 500 nodes!
 
-Moreovore all have a price ! Node can have many type of production, and each kind of production has its unit cost. Node can have also many consumptions with specific unavailability cost. Links between node have also max capacity and cost.
+Moreover, everything has a price! A node can have many types of production, and each kind of production has its unit cost. A node can also have many consumptions with specific unavailability costs. Links between nodes also have a max capacity and a cost.
 
 Network adequacy is not simple.
 
 ## Hadar
 
-Hadar compute adequacy from simple to complex network. For example, to compute above network, just few line need:
+Hadar computes adequacy from simple to complex networks. For example, computing the above network takes just a few lines:
+
 ``` python
-from hadar.solver.input import *
-from hadar.solver.study import solve
+import hadar as hd
+
+study = hd.Study(horizon=3)\
+    .network()\
+        .node('a')\
+            .consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
+            .production(cost=10, quantity=[30, 20, 10], name='prod')\
+        .node('b')\
+            .consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
+            .production(cost=10, quantity=[10, 20, 30], name='prod')\
+        .link(src='a', dest='b', quantity=[10, 10, 10], cost=2)\
+        .link(src='b', dest='a', quantity=[10, 10, 10], cost=2)\
+    .build()
+
+optimizer = hd.LPOptimizer()
+res = optimizer.solve(study)
+```
 
-study = Study(['a', 'b']) \
-    .add_on_node('a', data=Consumption(cost=10 ** 6, quantity=[20], type='load')) \
-    .add_on_node('a', data=Production(cost=10, quantity=[30], type='prod')) \
-    .add_on_node('b', data=Consumption(cost=10 ** 6, quantity=[20], type='load')) \
-    .add_on_node('b', data=Production(cost=20, quantity=[10], type='prod')) \
-    .add_border(src='a', dest='b', quantity=[10], cost=2) \
+And a few more lines to display graphical results.
 
-res = solve(study)
+```python
+plot = hd.HTMLPlotting(agg=hd.ResultAnalyzer(study, res),
+                       node_coord={'a': [2.33, 48.86], 'b': [4.38, 50.83]})
+plot.network().node('a').stack()
+plot.network().map(t=0, zoom=2.5)
 ```
+
+Get more information and examples at [https://www.hadar-simulator.org/](https://www.hadar-simulator.org/)
\ No newline at end of file
diff --git a/docs/requirements.txt b/docs/requirements.txt
index f6a154f..6603dac 100644
--- a/docs/requirements.txt
+++ b/docs/requirements.txt
@@ -5,6 +5,7 @@ plotly
 jupyter
 matplotlib
 requests
+progress
 sphinx
 sphinx-rtd-theme
 sphinx-autobuild
\ No newline at end of file
diff --git a/docs/source/architecture/analyzer.rst b/docs/source/architecture/analyzer.rst
index 578942d..c60c77c 100644
--- a/docs/source/architecture/analyzer.rst
+++ b/docs/source/architecture/analyzer.rst
@@ -12,7 +12,7 @@
 Today, there is only :code:`ResultAnalyzer`, with two feature levels:
 
 Before speaking about these features, let's see how data is transformed.
 
 Flatten Data
----------
+------------
 
 As said above, objects are nice to encapsulate data and represent it in an agnostic form. Objects can be serialized into JSON or something else to be used by other software, maybe in another language. But keeping objects to analyze data is awful.
 
@@ -77,14 +77,14 @@ Links follow the same pattern. Only the hierarchical structure naming changes. There are no
 +------+------+------+------+------+------+------+
 | 10   | 100  | 81   | fr   | uk   | 1    | 1    |
 +------+------+------+------+------+------+------+
-| ...  | ...  | ...  | ...  | ...  | ..   | ...  |
+| ...  | ...  | ...  | ...  | ...  | ..   | ..   |
 +------+------+------+------+------+------+------+
 
 It's done by the :code:`_build_link(study: Study, result: Result) -> pd.DataFrame` method.
 
 
-Low level analysis
-------------------
+Low level analysis power with a *FluentAPISelector*
+---------------------------------------------------
 
 When you observe flat data, there are two kinds of data: *content* like cost, given, asked, and *indexes* described by node, name, scn, t.
 
@@ -114,23 +114,29 @@ If a first-level index like node or scenario has only one element, it is removed.
 
 This result can be done by this line of code ::
 
    agg = hd.ResultAnalyzer(study, result)
-   df = agg.agg_prod(agg.inode['fr'], agg.scn[0], agg.itime[50:60], agg.iname)
+   df = agg.network().node('fr').scn(0).time(slice(50, 60)).production()
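+
+To picture what comes back, a minimal sketch (names are hypothetical, and the exact frame depends on the study) ::
+
+   # call order sets the index hierarchy: node, then scn, then t, then name
+   df = agg.network().node('fr').scn(0).time(slice(50, 60)).production()
+
+   # node and scn each match a single element, so those levels are dropped:
+   # df is left indexed by the remaining levels (time step, then production name)
+   df.loc[(50, 'nuclear'), 'used']  # 'nuclear' is a hypothetical production name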
 
-As you can see, user select index hierarchy by sorting :code:`agg.ixxx` . Then user specify filter by :code:`agg.ixxx[yy]`.
+For the analyzer, the Fluent API respects these rules:
 
-Behind this mechanism, there are :code:`Index` objects. As you can see directly in the code ::
+* API flow begins with :code:`network()`
+
+* API flow must contain exactly one of each :code:`node()` , :code:`time()` , :code:`scn()` element
+
+* API flow must contain exactly one element among :code:`link()` , :code:`production()` , :code:`consumption()`
 
-    @property
-    def inode(self) -> NodeIndex:
-        """
-        Get a node index to specify node slice to aggregate consumption or production.
+* Except for :code:`network()`, the API has no fixed order. The user is free to choose an order to set the data hierarchy.
 
-        :return: new instance of NodeIndex()
-        """
-        return NodeIndex()
+* Following the above rules, an API flow is always five elements long.
 
+Behind this mechanism, there are :code:`Index` objects. As you can see directly in the code ::
+
+    ...
+    self.consumption = lambda x=None: self._append(ConsIndex(x))
+    ...
+    self.time = lambda x=None: self._append(TimeIndex(x))
+    ...
 
-Each kind of index has to inherent from this class. :code:`Index` object encapsulate column metadata to use and range of filtered elements to keep (accessible by overriding :code:`__getitem__` method). Then, Hadar has child classes with good parameters : :code:`NameIndex` , :code:`NodeIndex` , :code:`ScnIndex` , :code:`TimeIndex` , :code:`SrcIndex` , :code:`DestIndex` . For example you can find below :code:`NodeIndex` implementation ::
+Each kind of index has to inherit from this class. An :code:`Index` object encapsulates the column metadata to use and the range of filtered elements to keep (accessible by overriding the :code:`__getitem__` method). Then, Hadar has child classes with the right parameters: :code:`ConsIndex` , :code:`ProdIndex` , :code:`NodeIndex` , :code:`ScnIndex` , :code:`TimeIndex` , :code:`LinkIndex` , :code:`DestIndex` . For example you can find below the :code:`NodeIndex` implementation ::
 
     class NodeIndex(Index[str]):
         """Index implementation to filter nodes"""
@@ -139,7 +145,9 @@
 
 .. image:: /_static/architecture/analyzer/ulm-index.png
 
+
+
-Index instantiation are completely hidden for user. It created implicitly when user types :code:`agg.ixxx[yy]`. Then, hadar will
+Index instantiation is completely hidden from the user. Then, Hadar will
 
 #. check that mandatory indexes are given with :code:`_assert_index` method.
 
diff --git a/docs/source/architecture/optimizer.rst b/docs/source/architecture/optimizer.rst
index 209c4e8..3688640 100644
--- a/docs/source/architecture/optimizer.rst
+++ b/docs/source/architecture/optimizer.rst
@@ -146,22 +146,37 @@ Study
 
 The most important attribute could be :code:`quantity`, which represents the quantity of power used in the network. For a link, it is a transfer capacity. For a production, it is a generation capacity. For a consumption, it is a forced load to sustain.
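+
+As a sketch of the accepted shapes (based on the :code:`_validate_quantity` rules shown later on this page; the numbers are only illustrative) ::
+
+    import hadar as hd
+
+    study = hd.Study(horizon=3, nb_scn=2)
+    # quantity accepts, and broadcasts when needed:
+    #   10                a single number, same value everywhere
+    #   [10, 20, 30]      shape (horizon, ), same series for every scenario
+    #   [[10], [20]]      shape (nb_scn, 1), one constant value per scenario
+    #   [[10, 20, 30],
+    #    [40, 50, 60]]    shape (nb_scn, horizon), fully explicit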
-User can construct Study step by step thanks to a *fluent API* ::
+Fluent API Selector
+*******************
 
-    import hadar as hd
+User can construct a Study step by step thanks to a *Fluent API* Selector ::
 
-    study = hd.Study(['a', 'b'], horizon=3) \
-        .add_on_node('a', data=hd.Consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')) \
-        .add_on_node('a', data=hd.Production(cost=10, quantity=[30, 20, 10], name='prod')) \
-        .add_on_node('b', data=hd.Consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')) \
-        .add_on_node('b', data=hd.Production(cost=20, quantity=[10, 20, 30], name='prod')) \
-        .add_link(src='a', dest='b', quantity=[10, 10, 10], cost=2) \
-        .add_link(src='b', dest='a', quantity=[10, 10, 10], cost=2) \
+    import hadar as hd
 
+    study = hd.Study(horizon=3)\
+        .network()\
+            .node('a')\
+                .consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
+                .production(cost=10, quantity=[30, 20, 10], name='prod')\
+            .node('b')\
+                .consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
+                .production(cost=10, quantity=[10, 20, 30], name='prod')\
+            .link(src='a', dest='b', quantity=[10, 10, 10], cost=2)\
+            .link(src='b', dest='a', quantity=[10, 10, 10], cost=2)\
+        .build()
 
     optim = hd.LPOptimizer()
     res = optim.solve(study)
 
+In the case of the optimizer, the *Fluent API Selector* is implemented by the :code:`NetworkFluentAPISelector` and
+:code:`NodeFluentAPISelector` classes. As you can guess from the above example, the optimizer rules for the API Selector are:
+
+* API flow begins with :code:`network()` and ends with :code:`build()`
+
+* You can only go downstream deeper step by step (i.e. :code:`network()` then :code:`node()`, then :code:`consumption()` )
+
+* But you can go upstream as you want (i.e. go directly from :code:`consumption()` to :code:`network()` )
+
 To help the user, the quantity field is flexible:
 
 * lists are converted to numpy array
diff --git a/docs/source/architecture/overview.rst b/docs/source/architecture/overview.rst
index 9618217..b96218d 100644
--- a/docs/source/architecture/overview.rst
+++ b/docs/source/architecture/overview.rst
@@ -61,25 +61,28 @@ Scikit-learn is the best example of high abstraction level API. For example, if
 How many people using this feature know that scikit-learn tries to project data into a higher-dimensional space to find a linear regression inside it? And that, to accelerate computation, it uses a mathematical feature called *the kernel trick*, because the problem respects strict requirements? Perhaps just a few people, and that's all the beauty of a high level API: it hides the background gears.
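+
+For the record, the three-line scikit-learn flow mentioned above looks roughly like this (a sketch; :code:`X` and :code:`y` are assumed to be an existing training set) ::
+
+    from sklearn.svm import SVR
+
+    svr = SVR(kernel='rbf')  # the kernel trick hides behind this single argument
+    svr.fit(X, y)
+    predictions = svr.predict(X)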
-Hadar tries to keep this high abstraction features. Look at the *Get Started* example ::
+Hadar tries to keep this high level of abstraction. Look at the `Get Started `_ example ::
 
     import hadar as hd
 
-    study = hd.Study(['a', 'b'], horizon=3) \
-        .add_on_node('a', data=hd.Consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')) \
-        .add_on_node('a', data=hd.Production(cost=10, quantity=[30, 20, 10], name='prod')) \
-        .add_on_node('b', data=hd.Consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')) \
-        .add_on_node('b', data=hd.Production(cost=20, quantity=[10, 20, 30], name='prod')) \
-        .add_link(src='a', dest='b', quantity=[10, 10, 10], cost=2) \
-        .add_link(src='b', dest='a', quantity=[10, 10, 10], cost=2) \
-
-
+    study = hd.Study(horizon=3)\
+        .network()\
+            .node('a')\
+                .consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
+                .production(cost=10, quantity=[30, 20, 10], name='prod')\
+            .node('b')\
+                .consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
+                .production(cost=10, quantity=[10, 20, 30], name='prod')\
+            .link(src='a', dest='b', quantity=[10, 10, 10], cost=2)\
+            .link(src='b', dest='a', quantity=[10, 10, 10], cost=2)\
+        .build()
+
     optim = hd.LPOptimizer()
     res = optim.solve(study)
 
-
 Create a study like you would draw it on paper. Put your nodes, attach some productions, consumptions, links, and run the optimizer.
 
+The Optimizer, Analyzer and Viewer parts are built around the same API, called *Fluent API Selector* inside the code. Each part has its own flavour.
 
 Go Next
 -------
diff --git a/docs/source/architecture/viewer.rst b/docs/source/architecture/viewer.rst
index cd22ab2..3488be1 100644
--- a/docs/source/architecture/viewer.rst
+++ b/docs/source/architecture/viewer.rst
@@ -5,4 +5,24 @@ Even with the highest level analyzer features. Data remains simple matrix or tab
 Viewer uses the Analyzer API to build plots. It's like an extra layer that converts numeric results into visual results.
 
-There are many viewers, all inherent from :code:`ABCPlotting` abstract class. Available plots are identical between viewers, only technologies used to build these plots change. Today, we have one type of plotting :code:`HTMLPlotting` which is coded upon plotly library to build html interactive plots.
+Viewer is split into two domains. The first part implements the *FluentAPISelector*; it uses ResultAnalyzer to compute results and performs the last computations before displaying graphics. This behaviour is coded inside all the :code:`*FluentAPISelector` classes.
+
+These classes are directly used by the user when asking for a graphic ::
+
+    plot = ...
+    plot.network().node('fr').consumption('load').gaussian(t=4)
+    plot.network().map(t=0, scn=0)
+    plot.network().node('de').stack(scn=7)
+
+For the Viewer, the Fluent API has these rules:
+
+* API begins with :code:`network`.
+
+* The user can only go downstream step by step into the data, and must specify an element choice at each step.
+
+* Once the wanted scope is reached (network, node, production, etc.), the graphics available for that scope can be called.
+
+
+The second part of the Viewer is only for plotting. Hadar can handle many different plotting libraries and technologies. A new plotting backend just has to implement :code:`ABCPlotting` and :code:`ABCElementPlotting`. Today one HTML implementation exists, built on the plotly library, inside :code:`HTMLPlotting` and :code:`HTMLElementPlotting`.
+
+Data sent to the plotting classes is complete, pre-computed and ready to display.
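+
+To illustrate this contract, here is a minimal sketch of an alternative backend (matplotlib is an assumption here, not part of Hadar; only two abstract methods are shown, a real backend must implement them all) ::
+
+    import numpy as np
+    import pandas as pd
+    import matplotlib.pyplot as plt
+
+    from hadar.viewer.abc import ABCElementPlotting
+
+    class MPLElementPlotting(ABCElementPlotting):
+        def timeline(self, df: pd.DataFrame, title: str):
+            # data arrives pre-computed: scenarios on columns, time on index
+            df.plot(title=title)
+            plt.show()
+
+        def monotone(self, y: np.ndarray, title: str):
+            # a monotone is just the values sorted in descending order
+            plt.plot(np.sort(y)[::-1])
+            plt.title(title)
+            plt.show()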
\ No newline at end of file
diff --git a/docs/source/conf.py b/docs/source/conf.py
index a0b8b92..275a0a1 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -23,7 +23,7 @@ author = 'RTE'
 
 # The full version, including alpha/beta/rc tags
-release = '0.1.0'
+release = hadar.__version__
 
 # -- General configuration ---------------------------------------------------
@@ -59,4 +59,4 @@ nbsphinx_execute = 'never'
 
-autodoc_mock_imports = ['pandas', 'numpy', 'ortools', 'plotly', 'jupyter', 'matplotlib', 'requests']
\ No newline at end of file
+autodoc_mock_imports = ['pandas', 'numpy', 'ortools', 'plotly', 'jupyter', 'matplotlib', 'requests', 'progress']
\ No newline at end of file
diff --git a/docs/source/dev-guide/contributing.rst b/docs/source/dev-guide/contributing.rst
index 412d565..2c2454d 100644
--- a/docs/source/dev-guide/contributing.rst
+++ b/docs/source/dev-guide/contributing.rst
@@ -1,5 +1,5 @@
 How to Contribute
-================
+=================
 
 First off, thank you for considering contributing to Hadar. We believe technology can change the world. But only a great community and open source can improve the world.
 
@@ -24,7 +24,7 @@ You can participate in Hadar in many ways:
 
 **The issue tracker is only for features, bugs or improvements; not for support. If you have a question please go to TODO . Any support issue will be closed.**
 
 Feature / Improvement
--------------------
+---------------------
 
 Little changes can be sent directly in a pull request, like:
 
diff --git a/docs/source/dev-guide/repository.rst b/docs/source/dev-guide/repository.rst
index cf5081e..7577429 100644
--- a/docs/source/dev-guide/repository.rst
+++ b/docs/source/dev-guide/repository.rst
@@ -14,7 +14,7 @@ Hadar `repository `_ is split in many parts.
 
 * :code:`.github/` github configuration to use Github Action for CI.
 
 Ticketing
-------
+---------
 
 We use all github features to organize development. We implement an Agile methodology and try to recreate Jira behavior in github. Therefore we map Jira features to Github ones, such as:
 
diff --git a/docs/source/mathematics/linear-model.rst b/docs/source/mathematics/linear-model.rst
index 1a1a961..313fb6d 100644
--- a/docs/source/mathematics/linear-model.rst
+++ b/docs/source/mathematics/linear-model.rst
@@ -91,7 +91,7 @@ Then productions and edges need to be bounded
 
 Lack of adequacy
---------------
+----------------
 
 Variables
 *********
 
@@ -116,7 +116,7 @@ Objective has a new term
 
 \end{array}
 
 Constraints
-**********
+***********
 
 Kirchhoff's law needs an update too. Loss of Load is represented as a *phantom* import of energy to reach adequacy.
 
diff --git a/docs/source/reference/hadar.viewer.rst b/docs/source/reference/hadar.viewer.rst
index 02e06d7..8abde1f 100644
--- a/docs/source/reference/hadar.viewer.rst
+++ b/docs/source/reference/hadar.viewer.rst
@@ -20,14 +20,6 @@ hadar.viewer.html module
    :undoc-members:
    :show-inheritance:
 
-hadar.viewer.jupyter module
----------------------------
-
-.. 
automodule:: hadar.viewer.jupyter - :members: - :undoc-members: - :show-inheritance: - Module contents --------------- diff --git a/examples/Analyze Result/Analyze Result.ipynb b/examples/Analyze Result/Analyze Result.ipynb index 645e2eb..b56b70a 100644 --- a/examples/Analyze Result/Analyze Result.ipynb +++ b/examples/Analyze Result/Analyze Result.ipynb @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:bad3790f9fda89268daf171c407712f66a5d4e75cd9ed5645046602e2068cdd8 -size 165584 +oid sha256:dacfe9b0d62bca5b684a9a11e6df484dce7aa1bb8c3a9a4b643bacbf920c7349 +size 3673391 diff --git a/examples/Begin Stochastic/Begin Stochastic.ipynb b/examples/Begin Stochastic/Begin Stochastic.ipynb index 810f723..78aaf57 100644 --- a/examples/Begin Stochastic/Begin Stochastic.ipynb +++ b/examples/Begin Stochastic/Begin Stochastic.ipynb @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:63dac9e2a0f7595429880c88b4c100e5b2895a7d7a2894f99e7fdc52037da054 -size 785660 +oid sha256:c69502d2df96e517fce85c8ed75f70f1c2486f112b529d4aa586e8cc8f972504 +size 4293528 diff --git a/examples/Cost and Prioritization/Cost and Prioritization.ipynb b/examples/Cost and Prioritization/Cost and Prioritization.ipynb index 6cdc28f..6dc3f5c 100644 --- a/examples/Cost and Prioritization/Cost and Prioritization.ipynb +++ b/examples/Cost and Prioritization/Cost and Prioritization.ipynb @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:04b6a888975fdc18c8975e110c00232dc30cfdb5981d546328ace3fe0a0c7f39 -size 3720900 +oid sha256:79288007849b1cb8c1eaddf923ac42ffbca6bafb4b35298a49480da7c11db981 +size 3721496 diff --git a/examples/FR-DE Adequacy/FR-DE Adequacy.ipynb b/examples/FR-DE Adequacy/FR-DE Adequacy.ipynb index ac5d78d..16b122d 100644 --- a/examples/FR-DE Adequacy/FR-DE Adequacy.ipynb +++ b/examples/FR-DE Adequacy/FR-DE Adequacy.ipynb @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:09fcd23634252ed9a8af5d462bde6407f15ec0a0d04e3584f3d70a2355a628c4 -size 7210221 +oid sha256:f2e011faf04ebe297d3e034e905533eb5f033306271f607c6a9cd08b3710716a +size 7213321 diff --git a/examples/Get Started/Get Started.ipynb b/examples/Get Started/Get Started.ipynb index f5a0d90..7b55791 100644 --- a/examples/Get Started/Get Started.ipynb +++ b/examples/Get Started/Get Started.ipynb @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:1709763186983ae7e53e63623559f7385f95805f98215004664d261e73672fb9 -size 3705974 +oid sha256:d7ba7667160ba07167f618179d0cb11709b98a0fc850f5b9b17067646e3b62c2 +size 3706311 diff --git a/examples/Network Investment/Network Investment.ipynb b/examples/Network Investment/Network Investment.ipynb index 632840a..3b8d1cb 100644 --- a/examples/Network Investment/Network Investment.ipynb +++ b/examples/Network Investment/Network Investment.ipynb @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:a5d3bb7b2495e3d75bfacb2b4c85c934c1c669ffe3b35608bb1392dc8152d2da -size 8580918 +oid sha256:78d3c267f9649cfa734c2b412387747861f17822060c3253910b924fa4af176c +size 8669490 diff --git a/examples/Worflow Advenced/Workflow Advenced.ipynb b/examples/Worflow Advenced/Workflow Advenced.ipynb index 4166eda..af175db 100644 --- a/examples/Worflow Advenced/Workflow Advenced.ipynb +++ b/examples/Worflow Advenced/Workflow Advenced.ipynb @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:babc67b4bf395c458cc78c0b8e228f46e821a7b6bc414766be74d17478eaa1ea -size 3679100 +oid 
sha256:ddc3ed0991a77d3977af8df544ac4141646076fc36ccbc17b7eb6e8c941c84d0
+size 3679261
diff --git a/examples/Workflow/Workflow.ipynb b/examples/Workflow/Workflow.ipynb
index d9d9d8a..709fda3 100644
--- a/examples/Workflow/Workflow.ipynb
+++ b/examples/Workflow/Workflow.ipynb
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:84a58202970a1bf1dc3aab2bb70fd64c3ea0264adb69dc76c9dae98b259a734d
-size 4046391
+oid sha256:3a6819280b7f4d679fa88716e078fc9453bebcfe30cc13a6870386193109e5a2
+size 4046483
diff --git a/hadar/__init__.py b/hadar/__init__.py
index 41a51b0..71a5a6d 100644
--- a/hadar/__init__.py
+++ b/hadar/__init__.py
@@ -17,7 +17,7 @@ from .viewer.html import HTMLPlotting
 from .analyzer.result import ResultAnalyzer
 
-__version__ = '0.3.0'
+__version__ = '0.3.1'
 
 level = os.getenv('HADAR_LOG', 'WARNING')
 
diff --git a/hadar/analyzer/result.py b/hadar/analyzer/result.py
index a474d34..342c373 100644
--- a/hadar/analyzer/result.py
+++ b/hadar/analyzer/result.py
@@ -4,8 +4,8 @@
 # If a copy of the Apache License, version 2.0 was not distributed with this file, you can obtain one at http://www.apache.org/licenses/LICENSE-2.0.
 # SPDX-License-Identifier: Apache-2.0
 # This file is part of hadar-simulator, a python adequacy library for everyone.
-
-from typing import Union, TypeVar, List, Generic, Type
+from functools import reduce
+from typing import Union, TypeVar, List, Generic, Type, Any, Dict
 
 import pandas as pd
 import numpy as np
@@ -13,7 +13,7 @@
 from hadar.optimizer.output import Result, OutputNode
 from hadar.optimizer.input import Study
 
-__all__ = ['ResultAnalyzer']
+__all__ = ['ResultAnalyzer', 'FluentAPISelector']
 
 T = TypeVar('T')
 
@@ -23,28 +23,25 @@ class Index(Generic[T]):
     """
     Generic Index to use to select and rank data.
     """
-    def __init__(self, column):
+    def __init__(self, column, index=None):
        """
        Initiate instance.
 
        :param column: column name linked to this index
        :param index: list of indexes or element to filter from data. None by default, meaning keep all data. 
""" - self.all = True self.column = column - - def __getitem__(self, index): - if isinstance(index, list): - index = tuple(index) - if not isinstance(index, tuple): - index = tuple([index]) - - if len(index) == 0: + if index is None: self.all = True + elif isinstance(index, list): + self.index = tuple(index) + self.all = len(index) == 0 + elif not isinstance(index, tuple): + self.index = tuple([index]) + self.all = False else: self.index = index self.all = False - return self def filter(self, df: pd.DataFrame) -> pd.Series: """ @@ -66,33 +63,33 @@ def is_alone(self) -> bool: return not self.all and len(self.index) <= 1 -class NodeIndex(Index[str]): - """Index implementation to filter nodes""" - def __init__(self): - Index.__init__(self, column='node') +class ProdIndex(Index[str]): + """Index implementation to filter productions""" + def __init__(self, index): + Index.__init__(self, column='name', index=index) -class SrcIndex(Index[str]): - """Index implementation to filter src node""" - def __init__(self): - Index.__init__(self, column='src') +class ConsIndex(Index[str]): + """ Index implementation to filter consumptions""" + def __init__(self, index): + Index.__init__(self, column='name', index=index) -class DestIndex(Index[str]): +class LinkIndex(Index[str]): """Index implementation to filter destination node""" - def __init__(self): - Index.__init__(self, column='dest') + def __init__(self, index): + Index.__init__(self, column='dest', index=index) -class NameIndex(Index[str]): +class NodeIndex(Index[str]): """Index implementation to filter name of elements""" - def __init__(self): - Index.__init__(self, column='name') + def __init__(self, index): + Index.__init__(self, column='node', index=index) class IntIndex(Index[int]): """Index implementation to handle int index with slice""" - def __init__(self, column: str): + def __init__(self, column: str, index): """ Create instance. 
@@ -100,24 +97,24 @@ def __init__(self, column: str):
        :param start: start datetime to filter (to use instead of index)
        :param end: end datetime to filter (to use instead of index)
        """
-        Index.__init__(self, column=column)
-
-    def __getitem__(self, index):
        if isinstance(index, slice):
-            index = tuple(range(index.start, index.stop, index.step if index.step else 1))
-        return Index.__getitem__(self, index)
+            start = 0 if index.start is None else index.start
+            stop = -1 if index.stop is None else index.stop
+            step = 1 if index.step is None else index.step
+            index = tuple(range(start, stop, step))
+        Index.__init__(self, column=column, index=index)
 
 
 class TimeIndex(IntIndex):
     """Index implementation to filter by time step"""
-    def __init__(self):
-        IntIndex.__init__(self, column='t')
+    def __init__(self, index):
+        IntIndex.__init__(self, column='t', index=index)
 
 
 class ScnIndex(IntIndex):
     """index implementation to filter by scenario"""
-    def __init__(self):
-        IntIndex.__init__(self, column='scn')
+    def __init__(self, index):
+        IntIndex.__init__(self, column='scn', index=index)
 
 
 class ResultAnalyzer:
@@ -136,7 +133,7 @@ def __init__(self, study: Study, result: Result):
         self.consumption = ResultAnalyzer._build_consumption(self.study, self.result)
         self.production = ResultAnalyzer._build_production(self.study, self.result)
 
-        self.link = ResultAnalyzer.link(self.study, self.result)
+        self.link = ResultAnalyzer._build_link(self.study, self.result)
 
     @staticmethod
     def _build_consumption(study: Study, result: Result):
@@ -197,16 +194,16 @@ def _build_production(study: Study, result: Result):
         return prod
 
     @staticmethod
-    def link(study: Study, result: Result):
+    def _build_link(study: Study, result: Result):
         """
         Flat all data to build global link dataframe
-        columns: | cost | avail | used | src | dest | t |
+        columns: | cost | avail | used | node | dest | t |
         """
         h = study.horizon
         scn = study.nb_scn
         s = h * scn * sum([len(n.links) for n in result.nodes.values()])
         link = {'cost': np.empty(s), 'avail': np.empty(s), 'used': np.empty(s),
-                'src': np.empty(s), 'dest': np.empty(s), 't': np.empty(s), 'scn': np.empty(s)}
+                'node': np.empty(s), 'dest': np.empty(s), 't': np.empty(s), 'scn': np.empty(s)}
         link = pd.DataFrame(data=link)
 
         n_link = 0
@@ -215,7 +212,7 @@
             slices = link.index[n_link * h * scn: (n_link + 1) * h * scn]
             link.loc[slices, 'cost'] = c.cost
             link.loc[slices, 'dest'] = c.dest
-            link.loc[slices, 'src'] = name
+            link.loc[slices, 'node'] = name
             link.loc[slices, 'avail'] = study.nodes[name].links[i].quantity.flatten()
             link.loc[slices, 'used'] = c.quantity.flatten()
             link.loc[slices, 't'] = np.tile(np.arange(h), scn)
@@ -242,88 +239,60 @@ def _remove_useless_index_level(df: pd.DataFrame, indexes: List[Index]) -> pd.Da
         return df
 
     @staticmethod
-    def _pivot(i0: Index, i1: Index, i2: Index, i3: Index, df: pd.DataFrame) -> pd.DataFrame:
+    def _pivot(indexes, df: pd.DataFrame) -> pd.DataFrame:
        """
        Pivot table by applying the filters and index hierarchy asked by indexes. 
-        :param i0: first level index
-        :param i1: second level index
-        :param i2: third level index
-        :param i3: fourth level index
-        :param df: dataframe to pivot
+        :param indexes: list of indexes
+        :param df: dataframe to pivot
         :return: pivot table
         """
-        indexes = [i0.column, i1.column, i2.column, i3.column]
-        pt = pd.pivot_table(data=df[i0.filter(df) & i1.filter(df) & i2.filter(df) & i3.filter(df)],
-                            index=indexes, aggfunc=lambda x: x.iloc[0])
+        names = [i.column for i in indexes]
+        filtered = reduce(lambda a, b: a & b, (i.filter(df) for i in indexes))
+        pt = pd.pivot_table(data=df[filtered], index=names, aggfunc=lambda x: x.iloc[0])
 
-        return ResultAnalyzer._remove_useless_index_level(df=pt, indexes=[i0, i1, i2, i3])
+        return ResultAnalyzer._remove_useless_index_level(df=pt, indexes=indexes)
 
     @staticmethod
-    def _assert_index(i0: Index, i1: Index, i2: Index, i3: Index, type: Type):
+    def check_index(indexes: List[Index], type: Type):
         """
-        Check indexes cohesion. Raise ValueError exception if indexes are wrong.
-
-        :param i0: first level index
-        :param i1: second level index
-        :param i2: third level index
-        :param i3: fourth level index
-        :param type: type to check inside index
-        :return:
+        Check indexes cohesion.
+        :param indexes: list of indexes
+        :param type: Index type to look for inside the list
+        :return: True if at least one index of this type is in the list, False otherwise
         """
-        if not (isinstance(i0, type) or isinstance(i1, type) or isinstance(i2, type) or isinstance(i3, type)):
-            raise ValueError('Indexes must contain a {}'.format(type.__class__.__name__))
+        return any(isinstance(i, type) for i in indexes)
 
-    def agg_cons(self, i0: Index, i1: Index, i2: Index, i3: Index) -> pd.DataFrame:
+    @staticmethod
+    def _assert_index(indexes: List[Index], type: Type):
         """
-        Aggregate consumption according to index level and filter.
+        Check indexes cohesion. Raise a ValueError if the expected index type is missing.
 
-        :param i0: first level index. Index type must be [NodeIndex, NameIndex, TimeIndex, ScnIndex]]
-        :param i1: second level index. Index type must be [NodeIndex, NameIndex, TimeIndex, ScnIndex]]
-        :param i2: third level index. Index type must be [NodeIndex, NameIndex, TimeIndex, ScnIndex]]
-        :param i3 fourth level index. Index type must be [NodeIndex, NameIndex, TimeIndex, ScnIndex]
-        :return: dataframe with hierarchical and filter index level asked
+        :param indexes: list of indexes
+        :param type: Index type to look for inside the list
+        :return:
         """
-        ResultAnalyzer._assert_index(i0, i1, i2, i3, TimeIndex)
-        ResultAnalyzer._assert_index(i0, i1, i2, i3, NodeIndex)
-        ResultAnalyzer._assert_index(i0, i1, i2, i3, NameIndex)
-        ResultAnalyzer._assert_index(i0, i1, i2, i3, ScnIndex)
-
-        return ResultAnalyzer._pivot(i0, i1, i2, i3, self.consumption)
+        if not ResultAnalyzer.check_index(indexes, type):
+            raise ValueError('Indexes must contain a {}'.format(type.__class__.__name__))
 
-    def agg_prod(self, i0: Index, i1: Index, i2: Index, i3: Index) -> pd.DataFrame:
+    def start(self, indexes: List[Index]) -> pd.DataFrame:
         """
-        Aggregate production according to index level and filter.
+        Aggregate according to the index levels and filters.
 
-        :param i0: first level index. Index type must be [NodeIndex, NameIndex, TimeIndex, ScnIndex]]
-        :param i1: second level index. Index type must be [NodeIndex, NameIndex, TimeIndex, ScnIndex]]
-        :param i2: third level index. Index type must be [NodeIndex, NameIndex, TimeIndex, ScnIndex]]
-        :param i3 fourth level index. 
Index type must be [NodeIndex, NameIndex, TimeIndex, ScnIndex] - :return: dataframe with hierarchical and filter index level asked + Aggregate according to index level and filter. """ - ResultAnalyzer._assert_index(i0, i1, i2, i3, TimeIndex) - ResultAnalyzer._assert_index(i0, i1, i2, i3, NodeIndex) - ResultAnalyzer._assert_index(i0, i1, i2, i3, NameIndex) - ResultAnalyzer._assert_index(i0, i1, i2, i3, ScnIndex) + ResultAnalyzer._assert_index(indexes, TimeIndex) + ResultAnalyzer._assert_index(indexes, NodeIndex) + ResultAnalyzer._assert_index(indexes, ScnIndex) - return ResultAnalyzer._pivot(i0, i1, i2, i3, self.production) + if ResultAnalyzer.check_index(indexes, ConsIndex): + return ResultAnalyzer._pivot(indexes, self.consumption) - def agg_link(self, i0: Index, i1: Index, i2: Index, i3: Index) -> pd.DataFrame: - """ - Aggregate link according to index level and filter. + if ResultAnalyzer.check_index(indexes, ProdIndex): + return ResultAnalyzer._pivot(indexes, self.production) - :param i0: first level index. Index type must be [DestIndex, SrcIndex, TimeIndex, ScnIndex] - :param i1: second level index. Index type must be [DestIndex, SrcIndex, TimeIndex, ScnIndex] - :param i2: third level index. Index type must be [DestIndex, SrcIndex, TimeIndex, ScnIndex] - :param i3 fourth level index. Index type must be [DestIndex, ScrIndex, TimeIndex, ScnIndex] - :return: dataframe with hierarchical and filter index level asked - """ - ResultAnalyzer._assert_index(i0, i1, i2, i3, TimeIndex) - ResultAnalyzer._assert_index(i0, i1, i2, i3, SrcIndex) - ResultAnalyzer._assert_index(i0, i1, i2, i3, DestIndex) - ResultAnalyzer._assert_index(i0, i1, i2, i3, ScnIndex) + if ResultAnalyzer.check_index(indexes, LinkIndex): + return ResultAnalyzer._pivot(indexes, self.link) - return ResultAnalyzer._pivot(i0, i1, i2, i3, self.link) + def network(self): + return FluentAPISelector([], self) def get_elements_inside(self, node: str): """ @@ -349,32 +318,43 @@ def get_balance(self, node: str) -> np.ndarray: if im.size > 0: balance += -im['used'].values.reshape(self.nb_scn, self.horizon) - exp = pd.pivot_table(self.link[self.link['src'] == node][['used', 'scn', 't']], index=['scn', 't'], aggfunc=np.sum) + exp = pd.pivot_table(self.link[self.link['node'] == node][['used', 'scn', 't']], index=['scn', 't'], aggfunc=np.sum) if exp.size > 0: balance += exp['used'].values.reshape(self.nb_scn, self.horizon) return balance def get_cost(self, node: str) -> np.ndarray: + """ + Compute adequacy cost on a node. 
+
+        :param node: node name
+        :return: matrix (scn, time)
+        """
         cost = np.zeros((self.nb_scn,  self.horizon))
         c, p, b = self.get_elements_inside(node)
         if c:
-            cons = self.agg_cons(self.inode[node], self.iscn, self.itime, self.iname)
-            cost += ((cons['asked'] - cons['given'])*cons['cost']).groupby(axis=0, level=(0, 1))\
+            cons = self.network().node(node).scn().time().consumption()
+            cost += ((cons['asked'] - cons['given']) * cons['cost']).groupby(axis=0, level=(0, 1)) \
                 .sum().sort_index(level=(0, 1)).values.reshape(self.nb_scn, self.horizon)
         if p:
-            prod = self.agg_prod(self.inode[node], self.iscn, self.itime, self.iname)
-            cost += (prod['used']*prod['cost']).groupby(axis=0, level=(0, 1))\
+            prod = self.network().node(node).scn().time().production()
+            cost += (prod['used'] * prod['cost']).groupby(axis=0, level=(0, 1)) \
                 .sum().sort_index(level=(0, 1)).values.reshape(self.nb_scn, self.horizon)
 
         if b:
-            link = self.agg_link(self.isrc[node], self.iscn, self.itime, self.idest)
-            cost += (link['used']*link['cost']).groupby(axis=0, level=(0, 1))\
+            link = self.network().node(node).scn().time().link()
+            cost += (link['used'] * link['cost']).groupby(axis=0, level=(0, 1)) \
                 .sum().sort_index(level=(0, 1)).values.reshape(self.nb_scn, self.horizon)
 
         return cost
 
     def get_rac(self) -> np.ndarray:
+        """
+        Compute Remaining Available Capacities on the network.
+
+        :return: matrix (scn, time)
+        """
         prod_used = self.production\
             .drop(['avail', 'cost'], axis=1)\
             .pivot_table(index='scn', columns='t', aggfunc=np.sum)\
@@ -426,56 +406,54 @@ def nodes(self) -> List[str]:
         """
         return self.result.nodes.keys()
 
-    @property
-    def inode(self) -> NodeIndex:
-        """
-        Get a node index to specify node slice to aggregate consumption or production.
-
-        :return: new instance of NodeIndex()
-        """
-        return NodeIndex()
 
-    @property
-    def iname(self) -> NameIndex:
-        """
-        Get a name index to specify name slice to aggregate consumption or production.
+class FluentAPISelector:
+    """
+    Fluent API Selector for the Analyzer.
 
-        :return: new instance of NameIndex()
-        """
-        return NameIndex()
+    The user can join network, node, consumption, production, link, time, scn to create filters and organize the hierarchy.
+    Joins can be in any order, except:
+    - a join begins with network
+    - joins are unique: exactly one element of node, time, scn is expected for each query
+    - production, consumption and link are mutually exclusive: only one of them is expected for each query
+    """
+    def __init__(self, indexes: List[Index], analyzer: ResultAnalyzer):
+        self.indexes = indexes
+        self.analyzer = analyzer
 
-    @property
-    def isrc(self) -> SrcIndex:
-        """
-        Get a source index to specify source slice to aggregate link.
+        if not ResultAnalyzer.check_index(indexes, ConsIndex) \
+                and not ResultAnalyzer.check_index(indexes, ProdIndex) \
+                and not ResultAnalyzer.check_index(indexes, LinkIndex):
+            self.consumption = lambda x=None: self._append(ConsIndex(x))
 
-        :return: new instance of SrcIndex()
-        """
-        return SrcIndex()
+        if not ResultAnalyzer.check_index(indexes, ProdIndex) \
+                and not ResultAnalyzer.check_index(indexes, ConsIndex) \
+                and not ResultAnalyzer.check_index(indexes, LinkIndex):
+            self.production = lambda x=None: self._append(ProdIndex(x))
 
-    @property
-    def idest(self) -> DestIndex:
-        """
-        Get a destination index to specify destination slice to aggregate link. 
+        if not ResultAnalyzer.check_index(indexes, LinkIndex) \
+                and not ResultAnalyzer.check_index(indexes, ConsIndex) \
+                and not ResultAnalyzer.check_index(indexes, ProdIndex):
+            self.link = lambda x=None: self._append(LinkIndex(x))
 
-        :return: new instance of DestIndex()
-        """
-        return DestIndex()
+        if not ResultAnalyzer.check_index(indexes, NodeIndex):
+            self.node = lambda x=None: self._append(NodeIndex(x))
 
-    @property
-    def itime(self) -> TimeIndex:
-        """
-        Get a time index to specify time slice to aggregate consumption, production or link.
+        if not ResultAnalyzer.check_index(indexes, TimeIndex):
+            self.time = lambda x=None: self._append(TimeIndex(x))
 
-        :return: new instance of TimeIndex()
-        """
-        return TimeIndex()
+        if not ResultAnalyzer.check_index(indexes, ScnIndex):
+            self.scn = lambda x=None: self._append(ScnIndex(x))
 
-    @property
-    def iscn(self) -> ScnIndex:
+    def _append(self, index: Index):
         """
-        Get a scenario index to specify scenario slice to aggregate consumption, production or link.
+        Append a new index, then either finish the query and start the analysis, or resume the query.
 
-        :return: new instance of ScnIndex()
+        :param index: index to append
+        :return: aggregation result if the query is complete, else a new FluentAPISelector
         """
-        return ScnIndex()
+        self.indexes.append(index)
+        if len(self.indexes) == 4:
+            return self.analyzer.start(self.indexes)
+        else:
+            return FluentAPISelector(self.indexes, self.analyzer)
\ No newline at end of file
diff --git a/hadar/optimizer/input.py b/hadar/optimizer/input.py
index bf7481c..0f4d4c0 100644
--- a/hadar/optimizer/input.py
+++ b/hadar/optimizer/input.py
@@ -5,12 +5,12 @@
 # SPDX-License-Identifier: Apache-2.0
 # This file is part of hadar-simulator, a python adequacy library for everyone.
 
-from typing import List, Union
+from typing import List, Union, Dict
 
 import numpy as np
 
-__all__ = ['Consumption', 'Link', 'Production', 'InputNode', 'Study']
+__all__ = ['Consumption', 'Link', 'Production', 'InputNode', 'Study', 'NetworkFluentAPISelector', 'NodeFluentAPISelector']
 
 class DTO:
@@ -104,44 +104,25 @@ class Study(DTO):
     Main object to facilitate building a study
     """
 
-    def __init__(self, node_names: List[str], horizon: int, nb_scn: int = 1):
+    def __init__(self, horizon: int, nb_scn: int = 1):
         """
         Instance study.
 
-        :param node_names: list of node names inside network.
         :param horizon: simulation time horizon (i.e. number of time steps in the simulation)
         :param nb_scn: number of scenarios in study. Default is 1.
         """
-        if len(node_names) > len(set(node_names)):
-            raise ValueError('some nodes are not unique')
 
-        self._nodes = {name: InputNode(consumptions=[], productions=[], links=[]) for name in node_names}
+        self.nodes = dict()
         self.horizon = horizon
         self.nb_scn = nb_scn
 
-    @property
-    def nodes(self):
-        return self._nodes
-
-    def add_on_node(self, node: str, data=Union[Production, Consumption, Link]):
+    def network(self):
        """
-        Attach a production or consumption into a node.
+        Entry point to create a study with the Fluent API. 
- :param node: node name to attach - :param data: consumption or production to attach :return: """ - if node not in self._nodes.keys(): - raise ValueError('Node "{}" is not in available nodes'.format(node)) - - if isinstance(data, Production): - self._add_production(node, data) - - elif isinstance(data, Consumption): - self._add_consumption(node, data) - - return self + return NetworkFluentAPISelector(study=self) def add_link(self, src: str, dest: str, cost: int, quantity: Union[List[float], np.ndarray, float]): """ @@ -155,33 +136,39 @@ def add_link(self, src: str, dest: str, cost: int, quantity: Union[List[float], """ if cost < 0: raise ValueError('link cost must be positive') - if dest not in self._nodes.keys(): + if src not in self.nodes.keys(): + raise ValueError('link source must be a valid node') + if dest not in self.nodes.keys(): raise ValueError('link destination must be a valid node') - if dest in [l.dest for l in self._nodes[src].links]: + if dest in [l.dest for l in self.nodes[src].links]: raise ValueError('link destination must be unique on a node') quantity = self._validate_quantity(quantity) - self._nodes[src].links.append(Link(dest=dest, quantity=quantity, cost=cost)) + self.nodes[src].links.append(Link(dest=dest, quantity=quantity, cost=cost)) return self + def add_node(self, node): + if node not in self.nodes.keys(): + self.nodes[node] = InputNode(consumptions=[], productions=[], links=[]) + def _add_production(self, node: str, prod: Production): if prod.cost < 0: raise ValueError('production cost must be positive') - if prod.name in [p.name for p in self._nodes[node].productions]: + if prod.name in [p.name for p in self.nodes[node].productions]: raise ValueError('production name must be unique on a node') prod.quantity = self._validate_quantity(prod.quantity) - self._nodes[node].productions.append(prod) + self.nodes[node].productions.append(prod) def _add_consumption(self, node: str, cons: Consumption): if cons.cost < 0: raise ValueError('consumption cost must be positive') - if cons.name in [c.name for c in self._nodes[node].consumptions]: + if cons.name in [c.name for c in self.nodes[node].consumptions]: raise ValueError('consumption name must be unique on a node') cons.quantity = self._validate_quantity(cons.quantity) - self._nodes[node].consumptions.append(cons) + self.nodes[node].consumptions.append(cons) def _validate_quantity(self, quantity: Union[List[float], np.ndarray, float]) -> np.ndarray: quantity = np.array(quantity) @@ -211,5 +198,110 @@ def _validate_quantity(self, quantity: Union[List[float], np.ndarray, float]) -> sc_given = 1 if len(quantity.shape) == 1 else quantity.shape[0] raise ValueError('Quantity must be: a number, an array like (horizon, ) or (nb_scn, 1) or (nb_scn, horizon). ' 'In your case horizon specified is %d and actual is %d. ' - 'And nb_scn specified %d is whereas actuel is %d' % + 'And nb_scn specified %d is whereas actual is %d' % (self.horizon, horizon_given, self.nb_scn, sc_given)) + + +class NetworkFluentAPISelector: + """ + Network level of Fluent API Selector. + """ + def __init__(self, study): + self.study = study + self.selector = dict() + + def node(self, name): + """ + Go to node level. + + :param name: node to select when changing level + :return: NodeFluentAPISelector initialized + """ + self.selector['node'] = name + self.study.add_node(name) + return NodeFluentAPISelector(self.study, self.selector) + + def link(self, src: str, dest: str, cost: int, quantity: Union[List, np.ndarray, float]): + """ + Add a link on network. 
+ + :param src: node source + :param dest: node destination + :param cost: unit cost transfer + :param quantity: available capacity + + :return: NetworkAPISelector with new link. + """ + self.study.add_link(src=src, dest=dest, cost=cost, quantity=quantity) + return NetworkFluentAPISelector(self.study) + + def build(self): + """ + Build study. + + :return: return study + """ + return self.study + + +class NodeFluentAPISelector: + """ + Node level of Fluent API Selector + """ + def __init__(self, study, selector): + self.study = study + self.selector = selector + + def consumption(self, name: str, cost: int, quantity: Union[List, np.ndarray, float]): + """ + Add consumption on node. + + :param name: consumption name + :param cost: cost of unsuitability + :param quantity: consumption to sustain + :return: NodeFluentAPISelector with new consumption + """ + self.study._add_consumption(node=self.selector['node'], cons=Consumption(name=name, cost=cost, quantity=quantity)) + return self + + def production(self, name: str, cost: int, quantity: Union[List, np.ndarray, float]): + """ + Add production on node. + + :param name: production name + :param cost: unit cost of use + :param quantity: available capacities + :return: NodeFluentAPISelector with new production + """ + self.study._add_production(node=self.selector['node'], prod=Production(name=name, cost=cost, quantity=quantity)) + return self + + def node(self, name): + """ + Go to different node level. + + :param name: new node level + :return: NodeFluentAPISelector + """ + return NetworkFluentAPISelector(self.study).node(name) + + def link(self, src: str, dest: str, cost: int, quantity: Union[List, np.ndarray, float]): + """ + Add a link on network. + + :param src: node source + :param dest: node destination + :param cost: unit cost transfer + :param quantity: available capacity + + :return: NetworkAPISelector with new link. + """ + return NetworkFluentAPISelector(self.study).link(src=src, dest=dest, cost=cost, quantity=quantity) + + def build(self): + """ + Build study. + + :return: study + """ + return self.study diff --git a/hadar/optimizer/remote/optimizer.py b/hadar/optimizer/remote/optimizer.py index e2ab100..ff0f1b2 100644 --- a/hadar/optimizer/remote/optimizer.py +++ b/hadar/optimizer/remote/optimizer.py @@ -7,8 +7,12 @@ import logging import pickle +import sys +from time import sleep import requests +from progress.bar import Bar +from progress.spinner import Spinner from hadar.optimizer.input import Study from hadar.optimizer.output import Result @@ -17,6 +21,20 @@ logger = logging.getLogger(__name__) +class ServerError(Exception): + def __init__(self, mes: str): + super().__init__(mes) + + +def check_code(code): + if code == 404: + raise ValueError("Can't find server url") + if code == 403: + raise ValueError("Wrong token given") + if code == 500: + raise IOError("Error has occurred on remote server") + + def solve_remote(study: Study, url: str, token: str = 'none') -> Result: """ Send study to remote server. 
@@ -40,14 +58,35 @@ def _solve_remote_wrap(study: Study, url: str, token: str = 'none', rqt=None) -> :return: result received from server """ # Send study - resp = rqt.post(url=url, data=pickle.dumps(study), params={'token': token}) - if resp.status_code == 404: - raise ValueError("Can't find server url") - if resp.status_code == 403: - raise ValueError("Wrong token given") - if resp.status_code == 500: - raise IOError("Error has occurred on remote server") + resp = rqt.post(url='%s/study' % url, data=pickle.dumps(study), params={'token': token}) + check_code(resp.status_code) + # Deserialize - result = pickle.loads(resp.content) - logging.info("Result received from server") - return result + resp = pickle.loads(resp.content) + id = resp['job'] + + Bar.check_tty = Spinner.check_tty = False + Bar.file = Spinner.file = sys.stdout + bar = Bar('QUEUED', max=resp['progress']) + spinner = None + + while resp['status'] in ['QUEUED', 'COMPUTING']: + resp = rqt.get(url='%s/result/%s' % (url, id), params={'token': token}) + check_code(resp.status_code) + resp = pickle.loads(resp.content) + + if resp['status'] == 'QUEUED': + bar.goto(resp['progress']) + + if resp['status'] == 'COMPUTING': + if spinner is None: + bar.finish() + spinner = Spinner('COMPUTING ') + spinner.next() + + sleep(0.5) + + if resp['status'] == 'ERROR': + raise ServerError(resp['message']) + + return resp['result'] diff --git a/hadar/viewer/abc.py b/hadar/viewer/abc.py index d3774ee..3af8df2 100644 --- a/hadar/viewer/abc.py +++ b/hadar/viewer/abc.py @@ -14,31 +14,81 @@ class ABCElementPlotting(ABC): + """ + Abstract interface to implement to plot graphics + """ @abstractmethod def timeline(self, df: pd.DataFrame, title: str): + """ + Plot timeline with all scenarios. + + :param df: dataframe with scenario on columns and time on index + :param title: title to plot + :return: + """ pass @abstractmethod def monotone(self, y: np.ndarray, title: str): + """ + Plot monotone. + + :param y: value vector + :param title: title to plot + :return: + """ pass @abstractmethod def gaussian(self, rac: np.ndarray, qt: np.ndarray, title: str): + """ + Plot gaussian. + + :param rac: Remain Available Capacities matrix (to plot green or red point) + :param qt: value vector + :param title: title to plot + :return: + """ pass @abstractmethod def stack(self, areas: List[Tuple[str, np.ndarray]], lines: List[Tuple[str, np.ndarray]], title: str): + """ + Plot stack. + + :param areas: list of timelines to stack with area + :param lines: list of timelines to stack with line + :param title: title to plot + :return: + """ pass @abstractmethod def matrix(self, data: np.ndarray, title): + """ + Plot matrix (heatmap) + + :param data: 2D matrix to plot + :param title: title to plot + :return: + """ pass def map_exchange(self, nodes, lines, limit, title, zoom): + """ + Plot map with exchanges as arrow. + + :param nodes: node to set on map + :param lines: arrow to se on map + :param limit: colorscale limit to use + :param title: title to plot + :param zoom: zoom to set on map + :return: + """ pass -class Element(ABC): +class FluentAPISelector(ABC): def __init__(self, plotting: ABCElementPlotting, agg: ResultAnalyzer): self.plotting = plotting self.agg = agg @@ -49,141 +99,192 @@ def not_both(t: int, scn: int): raise ValueError('you have to specify time or scenario index but not both') -class ConsumptionElement(Element): +class ConsumptionFluentAPISelector(FluentAPISelector): + """ + Consumption level of fluent api. 
+    """
     def __init__(self, plotting: ABCElementPlotting, agg: ResultAnalyzer, name: str, node: str, kind: str):
-        Element.__init__(self, plotting, agg)
+        FluentAPISelector.__init__(self, plotting, agg)
         self.name = name
         self.node = node
         self.kind = kind
 
     def timeline(self):
-        cons = self.agg.agg_cons(self.agg.inode[self.node], self.agg.iname[self.name],
-                                 self.agg.iscn, self.agg.itime)[self.kind]
+        """
+        Plot timeline graphics.
+        :return:
+        """
+        cons = self.agg.network().node(self.node).consumption(self.name).scn().time()[self.kind]
         title = 'Consumptions %s for %s on node %s' % (self.kind, self.name, self.node)
         return self.plotting.timeline(cons, title)
 
     def monotone(self, t: int = None, scn: int = None):
-        Element.not_both(t, scn)
+        """
+        Plot monotone graphics.
+
+        :param t: focus on t index
+        :param scn: focus on scn index if t not given
+        :return:
+        """
+        FluentAPISelector.not_both(t, scn)
         if t is not None:
-            y = self.agg.agg_cons(self.agg.inode[self.node], self.agg.iname[self.name],
-                                  self.agg.itime[t], self.agg.iscn)[self.kind].values
+            y = self.agg.network().node(self.node).consumption(self.name).time(t).scn()[self.kind].values
             title = 'Monotone consumption of %s on node %s at t=%0d' % (self.name, self.node, t)
         elif scn is not None:
-            y = self.agg.agg_cons(self.agg.inode[self.node], self.agg.iname[self.name],
-                                  self.agg.iscn[scn], self.agg.itime)[self.kind].values
+            y = self.agg.network().node(self.node).consumption(self.name).scn(scn).time()[self.kind].values
             title = 'Monotone consumption of %s on node %s at scn=%0d' % (self.name, self.node, scn)
 
         return self.plotting.monotone(y, title)
 
     def gaussian(self, t: int = None, scn: int = None):
-        Element.not_both(t, scn)
+        """
+        Plot Gaussian graphics.
+
+        :param t: focus on t index
+        :param scn: focus on scn index if t not given
+        :return:
+        """
+        FluentAPISelector.not_both(t, scn)
         if t is None:
-            cons = self.agg.agg_cons(self.agg.inode[self.node], self.agg.iname[self.name],
-                                     self.agg.iscn[scn], self.agg.itime)[self.kind].values
+            cons = self.agg.network().node(self.node).consumption(self.name).scn(scn).time()[self.kind].values
             rac = self.agg.get_rac()[scn, :]
             title = 'Gaussian consumption of %s on node %s at scn=%0d' % (self.name, self.node, scn)
         elif scn is None:
-            cons = self.agg.agg_cons(self.agg.inode[self.node], self.agg.iname[self.name],
-                                     self.agg.itime[t], self.agg.iscn)[self.kind].values
+            cons = self.agg.network().node(self.node).consumption(self.name).time(t).scn()[self.kind].values
             rac = self.agg.get_rac()[:, t]
             title = 'Gaussian consumption of %s on node %s at t=%0d' % (self.name, self.node, t)
 
         return self.plotting.gaussian(rac=rac, qt=cons, title=title)
 
 
-class ProductionElement(Element):
+class ProductionFluentAPISelector(FluentAPISelector):
+    """
+    Production level of fluent API
+    """
     def __init__(self, plotting: ABCElementPlotting, agg: ResultAnalyzer, name: str, node: str, kind: str):
-        Element.__init__(self, plotting, agg)
+        FluentAPISelector.__init__(self, plotting, agg)
         self.name = name
         self.node = node
         self.kind = kind
 
     def timeline(self):
-        prod = self.agg.agg_prod(self.agg.inode[self.node], self.agg.iname[self.name],
-                                 self.agg.iscn, self.agg.itime)[self.kind]
+        """
+        Plot timeline graphics.
+        :return:
+        """
+        prod = self.agg.network().node(self.node).production(self.name).scn().time()[self.kind]
         title = 'Production %s for %s on node %s' % (self.kind, self.name, self.node)
         return self.plotting.timeline(prod, title)
 
     def monotone(self, t: int = None, scn: int = None):
-        Element.not_both(t, scn)
+        """
+        Plot monotone graphics.
+
+        :param t: focus on t index
+        :param scn: focus on scn index if t not given
+        :return:
+        """
+        FluentAPISelector.not_both(t, scn)
         if t is not None:
-            y = self.agg.agg_prod(self.agg.inode[self.node], self.agg.iname[self.name],
-                                  self.agg.itime[t], self.agg.iscn)[self.kind].values
+            y = self.agg.network().node(self.node).production(self.name).time(t).scn()[self.kind].values
             title = 'Monotone production of %s on node %s at t=%0d' % (self.name, self.node, t)
         elif scn is not None:
-            y = self.agg.agg_prod(self.agg.inode[self.node], self.agg.iname[self.name],
-                                  self.agg.iscn[scn], self.agg.itime)[self.kind].values
+            y = self.agg.network().node(self.node).production(self.name).scn(scn).time()[self.kind].values
             title = 'Monotone production of %s on node %s at scn=%0d' % (self.name, self.node, scn)
 
         return self.plotting.monotone(y, title)
 
     def gaussian(self, t: int = None, scn: int = None):
-        Element.not_both(t, scn)
+        """
+        Plot Gaussian graphics.
+
+        :param t: focus on t index
+        :param scn: focus on scn index if t not given
+        :return:
+        """
+        FluentAPISelector.not_both(t, scn)
         if t is None:
-            prod = self.agg.agg_prod(self.agg.inode[self.node], self.agg.iname[self.name],
-                                     self.agg.iscn[scn], self.agg.itime)[self.kind].values
+            prod = self.agg.network().node(self.node).production(self.name).scn(scn).time()[self.kind].values
             rac = self.agg.get_rac()[scn, :]
             title = 'Gaussian production of %s on node %s at scn=%0d' % (self.name, self.node, scn)
         elif scn is None:
-            prod = self.agg.agg_prod(self.agg.inode[self.node], self.agg.iname[self.name],
-                                     self.agg.itime[t], self.agg.iscn)[self.kind].values
+            prod = self.agg.network().node(self.node).production(self.name).time(t).scn()[self.kind].values
             rac = self.agg.get_rac()[:, t]
             title = 'Gaussian production of %s on node %s at t=%0d' % (self.name, self.node, t)
 
         return self.plotting.gaussian(rac=rac, qt=prod, title=title)
 
 
-class LinkElement(Element):
+class LinkFluentAPISelector(FluentAPISelector):
+    """
+    Link level of fluent API
+    """
     def __init__(self, plotting: ABCElementPlotting, agg: ResultAnalyzer, src: str, dest: str, kind: str):
-        Element.__init__(self, plotting, agg)
+        FluentAPISelector.__init__(self, plotting, agg)
         self.src = src
         self.dest = dest
         self.kind = kind
 
     def timeline(self):
-        links = self.agg.agg_link(self.agg.isrc[self.src], self.agg.idest[self.dest], self.agg.iscn,
-                                  self.agg.itime)[self.kind]
+        """
+        Plot timeline graphics.
+        :return:
+        """
+        links = self.agg.network().node(self.src).link(self.dest).scn().time()[self.kind]
         title = 'Link %s from %s to %s' % (self.kind, self.src, self.dest)
         return self.plotting.timeline(links, title)
 
     def monotone(self, t: int = None, scn: int = None):
-        Element.not_both(t, scn)
+        """
+        Plot monotone graphics.
+
+        :param t: focus on t index
+        :param scn: focus on scn index if t not given
+        :return:
+        """
+        FluentAPISelector.not_both(t, scn)
         if t is not None:
-            y = self.agg.agg_link(self.agg.isrc[self.src], self.agg.idest[self.dest],
-                                  self.agg.itime[t], self.agg.iscn)[self.kind].values
+            y = self.agg.network().node(self.src).link(self.dest).time(t).scn()[self.kind].values
             title = 'Monotone link from %s to %s at t=%0d' % (self.src, self.dest, t)
         elif scn is not None:
-            y = self.agg.agg_link(self.agg.isrc[self.src], self.agg.idest[self.dest],
-                                  self.agg.iscn[scn], self.agg.itime)[self.kind].values
+            y = self.agg.network().node(self.src).link(self.dest).scn(scn).time()[self.kind].values
             title = 'Monotone link from %s to %s at scn=%0d' % (self.src, self.dest, scn)
 
         return self.plotting.monotone(y, title)
 
     def gaussian(self, t: int = None, scn: int = None):
-        Element.not_both(t, scn)
+        """
+        Plot Gaussian graphics.
+
+        :param t: focus on t index
+        :param scn: focus on scn index if t not given
+        :return:
+        """
+        FluentAPISelector.not_both(t, scn)
         if t is None:
-            prod = self.agg.agg_link(self.agg.isrc[self.src], self.agg.idest[self.dest],
-                                     self.agg.iscn[scn], self.agg.itime)[self.kind].values
+            prod = self.agg.network().node(self.src).link(self.dest).scn(scn).time()[self.kind].values
             rac = self.agg.get_rac()[scn, :]
             title = 'Gaussian link from %s to %s at scn=%0d' % (self.src, self.dest, scn)
         elif scn is None:
-            prod = self.agg.agg_prod(self.agg.isrc[self.src], self.agg.idest[self.dest],
-                                     self.agg.itime[t], self.agg.iscn)[self.kind].values
+            prod = self.agg.network().node(self.src).link(self.dest).time(t).scn()[self.kind].values
             rac = self.agg.get_rac()[:, t]
             title = 'Gaussian link from %s to %s at t=%0d' % (self.src, self.dest, t)
 
         return self.plotting.gaussian(rac=rac, qt=prod, title=title)
 
 
-class NodeElement(Element):
+class NodeFluentAPISelector(FluentAPISelector):
+    """
+    Node level of fluent API
+    """
     def __init__(self, plotting: ABCElementPlotting, agg: ResultAnalyzer, node: str):
-        Element.__init__(self, plotting, agg)
+        FluentAPISelector.__init__(self, plotting, agg)
         self.node = node
 
     def stack(self, scn: int = 0, prod_kind: str = 'used', cons_kind: str = 'asked'):
@@ -201,8 +302,7 @@ def stack(self, scn: int = 0, prod_kind: str = 'used', cons_kind: str = 'asked')
         areas = []
         # stack production with area
         if p > 0:
-            prod = self.agg.agg_prod(self.agg.iscn[scn], self.agg.inode[self.node], self.agg.iname, self.agg.itime) \
-                .sort_values('cost', ascending=True)
+            prod = self.agg.network().scn(scn).node(self.node).production().time().sort_values('cost', ascending=False)
             for i, name in enumerate(prod.index.get_level_values('name').unique()):
                 areas.append((name, prod.loc[name][prod_kind].sort_index().values))
 
@@ -215,8 +315,7 @@ def stack(self, scn: int = 0, prod_kind: str = 'used', cons_kind: str = 'asked')
         lines = []
         # Stack consumptions with line
         if c > 0:
-            cons = self.agg.agg_cons(self.agg.iscn[scn], self.agg.inode[self.node], self.agg.iname, self.agg.itime) \
-                .sort_values('cost', ascending=False)
+            cons = self.agg.network().scn(scn).node(self.node).consumption().time().sort_values('cost', ascending=False)
             for i, name in enumerate(cons.index.get_level_values('name').unique()):
                 lines.append((name, cons.loc[name][cons_kind].sort_index().values))
 
@@ -229,9 +328,48 @@ def stack(self, scn: int = 0, prod_kind: str = 'used', cons_kind: str = 'asked')
 
         return self.plotting.stack(areas, lines, title)
 
+    def consumption(self, name: str, kind: str = 'given') -> ConsumptionFluentAPISelector:
+        """
+        Go to consumption level of fluent API
+
+        :param name: select consumption name
+        :param kind: kind of data 'asked' or 'given'
+        :return:
+        """
+        return ConsumptionFluentAPISelector(plotting=self.plotting, agg=self.agg, node=self.node, name=name, kind=kind)
+
+    def production(self, name: str, kind: str = 'used') -> ProductionFluentAPISelector:
+        """
+        Go to production level of fluent API
+
+        :param name: select production name
+        :param kind: kind of data available ('avail') or 'used'
+        :return:
+        """
+        return ProductionFluentAPISelector(plotting=self.plotting, agg=self.agg, node=self.node, name=name, kind=kind)
+
+    def link(self, dest: str, kind: str = 'used'):
+        """
+        Go to link level of fluent API
+
+        :param dest: select destination node name
+        :param kind: kind of data available ('avail') or 'used'
+        :return:
+        """
+        return LinkFluentAPISelector(plotting=self.plotting, agg=self.agg, src=self.node, dest=dest, kind=kind)
+
+
+class NetworkFluentAPISelector(FluentAPISelector):
+    """
+    Network level of fluent API
+    """
 
-class NetworkElement(Element):
     def rac_matrix(self):
+        """
+        Plot RAC matrix graphics.
+
+        :return:
+        """
         rac = self.agg.get_rac()
         pct = (rac >= 0).sum() / rac.size * 100
         title = "RAC Matrix %0d %% passed" % pct
@@ -239,6 +377,15 @@ def rac_matrix(self):
         return self.plotting.matrix(data=rac, title=title)
 
     def map(self, t: int, zoom: int, scn: int = 0, limit: int = None):
+        """
+        Plot map exchange graphics.
+
+        :param t: t index to focus
+        :param zoom: zoom to set
+        :param scn: scn index to focus
+        :param limit: color scale limit to use
+        :return:
+        """
         nodes = {node: self.agg.get_balance(node=node)[scn, t] for node in self.agg.nodes}
 
         if limit is None:
@@ -246,8 +393,8 @@ def map(self, t: int, zoom: int, scn: int = 0, limit: int = None):
 
         lines = {}
         # Compute lines
-        links = self.agg.agg_link(self.agg.iscn[scn], self.agg.itime[t], self.agg.isrc, self.agg.idest)
-        for src in links.index.get_level_values('src').unique():
+        links = self.agg.network().scn(scn).time(t).node().link()
+        for src in links.index.get_level_values('node').unique():
             for dest in links.loc[src].index.get_level_values('dest').unique():
                 exchange = links.loc[src, dest]['used']  # forward
                 exchange -= links.loc[dest, src]['used'] if (dest, src) in links.index else 0  # backward
@@ -260,8 +407,16 @@ def map(self, t: int, zoom: int, scn: int = 0, limit: int = None):
         title = 'Exchange map at t=%0d scn=%0d' % (t, scn)
         return self.plotting.map_exchange(nodes, lines, limit, title, zoom)
 
+    def node(self, node: str):
+        """
+        Go to node level of fluent API
+
+        :param node: node name
+        :return: NodeFluentAPISelector
+        """
+        return NodeFluentAPISelector(plotting=self.plotting, agg=self.agg, node=node)
+
 
-class Plotting(ABC):
+class ABCPlotting(ABC):
     """
     Abstract method to plot optimizer result.
     """
@@ -294,41 +449,10 @@ def __init__(self, agg: ResultAnalyzer,
         else:
             self.time_index = np.arange(self.agg.horizon)
 
-    def node(self, node: str):
-        return NodeElement(plotting=self.plotting, agg=self.agg, node=node)
-
-    def consumption(self, node: str, name: str, kind: str = 'given') -> ConsumptionElement:
-        """
-        Plot all timelines consumption scenario.
-
-        :param node: selected node name
-        :param name: select consumption name
-        :param kind: kind of data 'asked' or 'given'
-        :return:
-        """
-        return ConsumptionElement(plotting=self.plotting, agg=self.agg, node=node, name=name, kind=kind)
-
-    def production(self, node: str, name: str, kind: str = 'used') -> ProductionElement:
+    def network(self):
         """
-        Plot all timelines production scenario.
-
-        :param node: selected node name
-        :param name: select production name
-        :param kind: kind of data available ('avail') or 'used'
-        :return:
-        """
-        return ProductionElement(plotting=self.plotting, agg=self.agg, node=node, name=name, kind=kind)
+        Entry point to use fluent API.
 
-    def link(self, src: str, dest: str, kind: str = 'used'):
+        :return: NetworkFluentAPISelector
         """
-        Plot all timelines links scenario.
-
-        :param src: selected source node name
-        :param dest: select destination node name
-        :param kind: kind of data available ('avail') or 'used'
-        :return:
-        """
-        return LinkElement(plotting=self.plotting, agg=self.agg, src=src, dest=dest, kind=kind)
-
-    def network(self):
-        return NetworkElement(plotting=self.plotting, agg=self.agg)
+        return NetworkFluentAPISelector(plotting=self.plotting, agg=self.agg)
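
The viewer diff above replaces the flat `plot.consumption(...)` / `plot.production(...)` / `plot.link(...)` accessors with selector chains that always start at `network()`. A minimal before/after sketch of the migration, assuming `plot` is an `HTMLPlotting` built on a solved study as in the `tests/viewer/test_html.py` hunks further down:

```python
# 'plot' is assumed to be an hd.HTMLPlotting instance (see tests below).

# before: plot.consumption(node='a', name='load').timeline()
fig = plot.network().node('a').consumption('load').timeline()

# before: plot.link(src='a', dest='b').monotone(scn=0)
fig = plot.network().node('a').link('b').monotone(scn=0)

# network-wide views share the same entry point
fig = plot.network().rac_matrix()
```
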
diff --git a/hadar/viewer/html.py b/hadar/viewer/html.py
index 8f011ad..c53e544 100644
--- a/hadar/viewer/html.py
+++ b/hadar/viewer/html.py
@@ -12,9 +12,8 @@ import plotly.graph_objects as go
 
 from matplotlib.cm import coolwarm
 
-from hadar.analyzer.result import ResultAnalyzer, NodeIndex, SrcIndex, TimeIndex, DestIndex, NameIndex
-from hadar.viewer.abc import Plotting, ConsumptionElement, ABCElementPlotting, ProductionElement, LinkElement, \
-    NodeElement, NetworkElement
+from hadar.analyzer.result import ResultAnalyzer
+from hadar.viewer.abc import ABCPlotting, ABCElementPlotting
 
 __all__ = ['HTMLPlotting']
 
@@ -210,7 +209,7 @@ def _plot_links(self, fig: go.Figure, start: str, end: str, color: str, qt: floa
                                line=dict(width=2 * size, color=color)))
 
 
-class HTMLPlotting(Plotting):
+class HTMLPlotting(ABCPlotting):
     """
     Plotting implementation interactive html graphics. (Use plotly)
     """
@@ -229,7 +228,7 @@ def __init__(self, agg: ResultAnalyzer, unit_symbol: str = '',
         :param node_coord: nodes coordinates to use for map plotting
         :param map_element_size: size on element draw on map. default as 1.
         """
-        Plotting.__init__(self, agg, unit_symbol, time_start, time_end, node_coord)
+        ABCPlotting.__init__(self, agg, unit_symbol, time_start, time_end, node_coord)
 
         self.plotting = HTMLElementPlotting(self.unit, self.time_index, self.coord)
diff --git a/hadar/viewer/jupyter.py b/hadar/viewer/jupyter.py
deleted file mode 100644
index 8c79939..0000000
--- a/hadar/viewer/jupyter.py
+++ /dev/null
@@ -1,123 +0,0 @@
-# Copyright (c) 2019-2020, RTE (https://www.rte-france.com)
-# See AUTHORS.txt
-# This Source Code Form is subject to the terms of the Apache License, version 2.0.
-# If a copy of the Apache License, version 2.0 was not distributed with this file, you can obtain one at http://www.apache.org/licenses/LICENSE-2.0.
-# SPDX-License-Identifier: Apache-2.0
-# This file is part of hadar-simulator, a python adequacy library for everyone.
-
-from typing import Dict, List
-
-import ipywidgets as widgets
-import matplotlib
-from IPython.display import display, clear_output
-
-from hadar.analyzer.result import ResultAnalyzer
-from hadar.viewer.html import HTMLPlotting
-
-
-__all__ = ['JupyterPlotting']
-
-
-class JupyterPlotting(HTMLPlotting):
-    """
-    Plotting implementation to use with Jupyter.
-    Graphics are generated by HTMLPlotting, then jupyter widgets are used to be more flexible.
-    """
-    def __init__(self, agg: ResultAnalyzer, unit_symbol: str = '',
-                 time_start=None, time_end=None,
-                 cmap=matplotlib.cm.coolwarm,
-                 node_coord: Dict[str, List[float]] = None,
-                 map_element_size: int = 1):
-        """
-        Create instance.
-
-        :param agg: ResultAggragator instance to use
-        :param unit_symbol: symbol on quantity unit used. ex. MW, litter, Go, ...
-        :param time_start: time to use as the start of study horizon
-        :param time_end: time to use as the end of study horizon
-        :param cmap: matplotlib color map to use (coolwarm as default)
-        :param node_coord: nodes coordinates to use for map plotting
-        :param map_element_size: size on element draw on map. default as 1.
-        """
-
-        HTMLPlotting.__init__(self, agg, unit_symbol, time_start, time_end, cmap, node_coord, map_element_size)
-
-    def _dropmenu(self, plot, items, **kargs):
-        """
-        Wrap html graphics with dropdown menu.
-
-        :param plot: plot function to call when value change
-        :param items: list of items present in drop down menu
-        :return:
-        """
-        menu = widgets.Dropdown(options=items, value=items[0],
-                                description='Node:', disabled=False)
-        output = widgets.Output()
-
-        def _plot(select):
-            with output:
-                clear_output()
-                fig = plot(self, select, **kargs)
-                fig.show()
-
-        def _on_event(event):
-            if event['name'] == 'value' and event['type'] == 'change':
-                _plot(event['new'])
-
-        menu.observe(_on_event)
-        display(menu, output)
-        _plot(items[0])
-
-    def stack(self, node: str = None, prod_kind: str = 'used', cons_kind: str = 'asked'):
-        """
-        Plot with production stacked with area and consumptions stacked by dashed lines.
-
-        :param node: select node to plot. If None, use a dropdown menu to select inside notebook
-        :param prod_kind: select which prod to stack : available ('avail') or 'used'
-        :param cons_kind: select which cons to stacl : 'asked' or 'given'
-        :return: plotly figure or jupyter widget to plot
-        """
-        if node is not None:
-            return HTMLPlotting.stack(self, node, prod_kind, cons_kind).show()
-        else:
-            nodes = list(self.agg.nodes)
-            self._dropmenu(HTMLPlotting.stack, nodes, prod_kind=prod_kind, cons_kind=cons_kind)
-
-    def _intslider(self, plot, size):
-        """
-        Wrap plot with a intslider.
-
-        :param plot: plot to call when value change
-        :param size: size of intslider (min=0, step=1)
-        :return:
-        """
-        slider = widgets.IntSlider(value=0, min=0, max=size, step=1, description='Timestep:', disabled=False,
-                                   continuous_update=False, orientation='horizontal', readout=True, readout_format='d')
-        output = widgets.Output()
-
-        def _plot(select):
-            with output:
-                clear_output()
-                fig = plot(self, select)
-                fig.show()
-
-        def _on_event(event):
-            if event['name'] == 'value' and event['type'] == 'change':
-                _plot(event['new'])
-
-        slider.observe(_on_event)
-        display(slider, output)
-        _plot(0)
-
-    def exchanges_map(self, t: int = None):
-        """
-        Plot a map with node (color are balance) and arrow between nodes (color for quantity).
-
-        :param t: timestep to plot
-        :return: plotly figure or jupyter widget to plot
-        """
-        if t is not None:
-            return HTMLPlotting.exchanges_map(self, t).show()
-        else:
-            h = self.agg.horizon -1
-            self._intslider(HTMLPlotting.exchanges_map, h)
diff --git a/hadar/workflow/pipeline.py b/hadar/workflow/pipeline.py
index f6ad8c6..b817cc7 100644
--- a/hadar/workflow/pipeline.py
+++ b/hadar/workflow/pipeline.py
@@ -182,7 +182,7 @@ def __add__(self, other):
         self.stages.append(other)
         return self
 
-    def compute(self, timeline):
+    def __call__(self, timeline):
         """
         Launch all stages computation.
@@ -194,7 +194,7 @@ def compute(self, timeline):
         self.assert_computable(timeline)
 
         for stage in self.stages:
-            timeline = stage.compute(timeline.copy())
+            timeline = stage(timeline.copy())
 
         return timeline
 
@@ -251,7 +251,7 @@ def _process_timeline(self, timeline: pd.DataFrame) -> pd.DataFrame:
         """
         pass
 
-    def compute(self, timeline: pd.DataFrame) -> pd.DataFrame:
+    def __call__(self, timeline: pd.DataFrame) -> pd.DataFrame:
         """
         Launch Stage computation.
 
@@ -392,14 +392,14 @@ class Rename(Stage):
     Rename column names.
     """
 
-    def __init__(self, rename: Dict[str, str]):
+    def __init__(self, **kwargs):
         """
         Initiate Stage.
 
-        :param rename: dictionary of strings like { old_name: new_name }
+        :param kwargs: keyword arguments like Rename(old_name='new_name')
         """
-        Stage.__init__(self, plug=RestrictedPlug(inputs=list(rename.keys()), outputs=list(rename.values())))
-        self.rename = rename
+        Stage.__init__(self, plug=RestrictedPlug(inputs=list(kwargs.keys()), outputs=list(kwargs.values())))
+        self.rename = kwargs
 
     def _process_timeline(self, timeline: pd.DataFrame) -> pd.DataFrame:
         timeline.columns = timeline.columns.map(lambda i: (i[0], self._rename(i[1])))
@@ -424,7 +424,7 @@ def __init__(self, result_name: str):
         Instance Stage
         :param result_name: result column name to use for shuffler
         """
-        Rename.__init__(self, {result_name: TO_SHUFFLER})
+        Rename.__init__(self, **{result_name: TO_SHUFFLER})
 
 
 class Drop(Stage):
diff --git a/hadar/workflow/shuffler.py b/hadar/workflow/shuffler.py
index 20c1d3f..aaa496e 100644
--- a/hadar/workflow/shuffler.py
+++ b/hadar/workflow/shuffler.py
@@ -75,7 +75,7 @@ def compute(self) -> np.ndarray:
 
         :return: data generated by pipeline
         """
-        res = self.pipeline.compute(self.df)
+        res = self.pipeline(self.df)
         drop_columns = res.columns.get_level_values(1).unique().drop(TO_SHUFFLER)
         if drop_columns.any():
             res = res.drop(drop_columns, axis=1, level=1)
diff --git a/requirements.txt b/requirements.txt
index 1181394..c433e70 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -3,4 +3,5 @@ numpy
 ortools
 plotly
 matplotlib
-requests
\ No newline at end of file
+requests
+progress
\ No newline at end of file
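
The pipeline change above makes stages and whole pipelines plain callables and turns `Rename` into a kwargs-based stage. A minimal sketch of the new calling convention, using stage classes that appear in the workflow tests below (the timeline values are made up):

```python
import numpy as np
import pandas as pd

from hadar.workflow.pipeline import RepeatScenario, Rename, Clip

# Build a pipeline with '+', then run it by calling it directly.
timeline = pd.DataFrame({'data': np.ones(100) * 100})
pipe = RepeatScenario(n=10) + Rename(data='quantity') + Clip(lower=80)

out = pipe(timeline)  # was: pipe.compute(timeline)
```
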
diff --git a/tests/analyzer/test_result.py b/tests/analyzer/test_result.py
index ef774fd..5ed3c50 100644
--- a/tests/analyzer/test_result.py
+++ b/tests/analyzer/test_result.py
@@ -21,23 +21,23 @@ def test_no_parameters(self):
         self.assertEqual(True, Index(column='i').all)
 
     def test_on_element(self):
-        i = Index(column='i')['fr']
+        i = Index(column='i', index='fr')
         self.assertEqual(False, i.all)
         self.assertEqual(('fr',), i.index)
 
     def test_list_1(self):
-        i = Index(column='i')['fr', 'be']
+        i = Index(column='i', index=['fr', 'be'])
         self.assertEqual(False, i.all)
         self.assertEqual(('fr', 'be'), i.index)
 
     def test_list_2(self):
         l = ['fr', 'be']
-        i = Index(column='i')[l]
+        i = Index(column='i', index=l)
         self.assertEqual(False, i.all)
         self.assertEqual(('fr', 'be'), i.index)
 
     def test_filter(self):
-        i = Index(column='i')['fr', 'be']
+        i = Index(column='i', index=['fr', 'be'])
         df = pd.DataFrame(data={'i': ['it', 'fr', 'fr', 'be', 'de', 'it', 'be'],
                                 'a': [0, 1, 2, 3, 4, 5, 6]})
 
@@ -49,27 +49,32 @@ def test_filter(self):
 
 class TestIntIndex(unittest.TestCase):
     def test_range(self):
-        i = IntIndex('i')[2:6]
+        i = IntIndex('i', index=slice(2, 6))
         self.assertEqual(False, i.all)
         self.assertEqual((2, 3, 4, 5), i.index)
 
     def test_list(self):
-        i = IntIndex('i')[2, 6]
+        i = IntIndex('i', index=[2, 6])
         self.assertEqual(False, i.all)
         self.assertEqual((2, 6), i.index)
 
 
 class TestAnalyzer(unittest.TestCase):
     def setUp(self) -> None:
-        self.study = Study(['a', 'b', 'c'], horizon=3, nb_scn=2) \
-            .add_on_node('a', data=Consumption(cost=10 ** 3, quantity=[[120, 12, 12], [12, 120, 120]], name='load')) \
-            .add_on_node('a', data=Consumption(cost=10 ** 3, quantity=[[130, 13, 13], [13, 130, 130]], name='car')) \
-            .add_on_node('a', data=Production(cost=10, quantity=[[130, 13, 13], [13, 130, 130]], name='prod')) \
-            .add_on_node('b', data=Consumption(cost=10 ** 3, quantity=[[120, 12, 12], [12, 120, 120]], name='load')) \
-            .add_on_node('b', data=Production(cost=20, quantity=[[110, 11, 11], [11, 110, 110]], name='prod')) \
-            .add_on_node('b', data=Production(cost=20, quantity=[[120, 12, 12], [12, 120, 120]], name='nuclear')) \
-            .add_link(src='a', dest='b', quantity=[[110, 11, 11], [11, 110, 110]], cost=2) \
-            .add_link(src='a', dest='c', quantity=[[120, 12, 12], [12, 120, 120]], cost=2)
+        self.study = Study(horizon=3, nb_scn=2)\
+            .network()\
+            .node('a')\
+            .consumption(cost=10 ** 3, quantity=[[120, 12, 12], [12, 120, 120]], name='load')\
+            .consumption(cost=10 ** 3, quantity=[[130, 13, 13], [13, 130, 130]], name='car')\
+            .production(cost=10, quantity=[[130, 13, 13], [13, 130, 130]], name='prod')\
+            .node('b')\
+            .consumption(cost=10 ** 3, quantity=[[120, 12, 12], [12, 120, 120]], name='load')\
+            .production(cost=20, quantity=[[110, 11, 11], [11, 110, 110]], name='prod')\
+            .production(cost=20, quantity=[[120, 12, 12], [12, 120, 120]], name='nuclear')\
+            .node('c')\
+            .link(src='a', dest='b', quantity=[[110, 11, 11], [11, 110, 110]], cost=2)\
+            .link(src='a', dest='c', quantity=[[120, 12, 12], [12, 120, 120]], cost=2)\
+            .build()
 
         out = {
             'a': OutputNode(consumptions=[OutputConsumption(cost=10 ** 3, quantity=[[20, 2, 2], [2, 20, 20]],
                                                             name='load'),
@@ -119,12 +124,12 @@ def test_build_link(self):
         exp = pd.DataFrame(data={'cost': [2] * 12,
                                  'avail': [110, 11, 11, 11, 110, 110, 120, 12, 12, 12, 120, 120],
                                  'used': [10, 1, 1, 1, 10, 10, 20, 2, 2, 2, 20, 20],
-                                 'src': ['a'] * 12,
+                                 'node': ['a'] * 12,
                                  'dest': ['b'] * 6 + ['c'] * 6,
                                  't': [0, 1, 2] * 4,
                                  'scn': [0, 0, 0, 1, 1, 1] * 2}, dtype=float)
 
-        link = ResultAnalyzer.link(self.study, self.result)
+        link = ResultAnalyzer._build_link(self.study, self.result)
 
         pd.testing.assert_frame_equal(exp, link)
 
@@ -136,7 +141,7 @@ def test_aggregate_cons(self):
                                            'given': [20, 2, 2]}, dtype=float, index=index)
 
         agg = ResultAnalyzer(study=self.study, result=self.result)
-        cons = agg.agg_cons(agg.iscn[0], agg.inode['a'], agg.iname['load'], agg.itime)
+        cons = agg.network().scn(0).node('a').consumption('load').time()
 
        pd.testing.assert_frame_equal(exp_cons, cons)
 
@@ -150,7 +155,7 @@ def test_aggregate_prod(self):
                                            'used': [30, 3, 3, 10, 1, 1]}, dtype=float, index=index)
 
         agg = ResultAnalyzer(study=self.study, result=self.result)
-        cons = agg.agg_prod(agg.iscn[0], agg.inode['a', 'b'], agg.iname['prod'], agg.itime)
+        cons = agg.network().scn(0).node(['a', 'b']).production('prod').time()
 
         pd.testing.assert_frame_equal(exp_cons, cons)
 
@@ -164,7 +169,7 @@ def test_aggregate_link(self):
                                            'used': [10, 1, 1, 20, 2, 2]}, dtype=float, index=index)
 
         agg = ResultAnalyzer(study=self.study, result=self.result)
-        cons = agg.agg_link(agg.iscn[0], agg.isrc['a'], agg.idest['b', 'c'], agg.itime)
+        cons = agg.network().scn(0).node('a').link(['b', 'c']).time()
 
         pd.testing.assert_frame_equal(exp_cons, cons)
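
These tests document the analyzer counterpart of the viewer change: `agg_cons`, `agg_prod` and `agg_link` give way to one fluent chain whose call order fixes the index hierarchy of the returned frame. A sketch, assuming `study` and `result` come from a solved optimizer run:

```python
import hadar as hd

# 'study' and 'result' are assumed to come from LPOptimizer().solve(study)
agg = hd.ResultAnalyzer(study=study, result=result)

# was: agg.agg_cons(agg.iscn[0], agg.inode['a'], agg.iname['load'], agg.itime)
cons = agg.network().scn(0).node('a').consumption('load').time()

# lists are still accepted at every selection level
prod = agg.network().scn(0).node(['a', 'b']).production('prod').time()
```
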
diff --git a/tests/optimizer/it/test_optimizer.py b/tests/optimizer/it/test_optimizer.py
index 8995687..2bcc341 100644
--- a/tests/optimizer/it/test_optimizer.py
+++ b/tests/optimizer/it/test_optimizer.py
@@ -34,11 +34,14 @@ def test_merit_order(self):
         | gas: 5 |
         :return:
         """
-        study = hd.Study(['a'], horizon=3, nb_scn=2) \
-            .add_on_node(node='a', data=hd.Consumption(name='load', cost=10 ** 6, quantity=[[30, 6, 6], [6, 30, 30]])) \
-            .add_on_node(node='a', data=hd.Production(name='nuclear', cost=20, quantity=[[15, 3, 3], [3, 15, 15]])) \
-            .add_on_node(node='a', data=hd.Production(name='solar', cost=10, quantity=[[10, 2, 2], [2, 10, 10]])) \
-            .add_on_node(node='a', data=hd.Production(name='oil', cost=30, quantity=[[10, 2, 2], [2, 10, 10]]))
+        study = hd.Study(horizon=3, nb_scn=2)\
+            .network()\
+            .node('a')\
+            .consumption(name='load', cost=10 ** 6, quantity=[[30, 6, 6], [6, 30, 30]])\
+            .production(name='nuclear', cost=20, quantity=[[15, 3, 3], [3, 15, 15]])\
+            .production(name='solar', cost=10, quantity=[[10, 2, 2], [2, 10, 10]])\
+            .production(name='oil', cost=30, quantity=[[10, 2, 2], [2, 10, 10]])\
+            .build()
 
         nodes_expected = dict()
         nodes_expected['a'] = hd.OutputNode(
@@ -68,12 +71,16 @@ def test_exchange_two_nodes(self):
         :return:
         """
         # Input
-        study = hd.Study(['a', 'b'], horizon=2) \
-            .add_on_node('a', data=hd.Consumption(cost=10 ** 6, quantity=[20, 200], name='load')) \
-            .add_on_node('a', data=hd.Production(cost=10, quantity=[30, 300], name='prod')) \
-            .add_on_node('b', data=hd.Consumption(cost=10 ** 6, quantity=[20, 200], name='load')) \
-            .add_on_node('b', data=hd.Production(cost=20, quantity=[10, 100], name='prod')) \
-            .add_link(src='a', dest='b', quantity=[10, 100], cost=2)
+        study = hd.Study(horizon=2)\
+            .network()\
+            .node('a')\
+            .consumption(cost=10 ** 6, quantity=[20, 200], name='load')\
+            .production(cost=10, quantity=[30, 300], name='prod')\
+            .node('b')\
+            .consumption(cost=10 ** 6, quantity=[20, 200], name='load')\
+            .production(cost=20, quantity=[10, 100], name='prod')\
+            .link(src='a', dest='b', quantity=[10, 100], cost=2)\
+            .build()
 
         nodes_expected = {}
         nodes_expected['a'] = hd.OutputNode(
@@ -110,15 +117,20 @@ def test_exchange_two_concurrent_nodes(self):
         | nuclear: 0 |
         :return:
         """
-        study = hd.Study(node_names=['a', 'b', 'c'], horizon=1) \
-            .add_on_node('a', data=hd.Consumption(cost=10 ** 6, quantity=[10], name='load')) \
-            .add_on_node('a', data=hd.Production(cost=10, quantity=[30], name='nuclear')) \
-            .add_on_node('b', data=hd.Consumption(cost=10 ** 6, quantity=[10], name='load')) \
-            .add_on_node('b', data=hd.Production(cost=20, quantity=[10], name='nuclear')) \
-            .add_on_node('c', data=hd.Consumption(cost=10 ** 6, quantity=[10], name='load')) \
-            .add_on_node('c', data=hd.Production(cost=20, quantity=[10], name='nuclear')) \
-            .add_link(src='a', dest='b', quantity=[20], cost=2) \
-            .add_link(src='a', dest='c', quantity=[20], cost=2)
+        study = hd.Study(horizon=1)\
+            .network()\
+            .node('a')\
+            .consumption(cost=10 ** 6, quantity=10, name='load')\
+            .production(cost=10, quantity=30, name='nuclear')\
+            .node('b')\
+            .consumption(cost=10 ** 6, quantity=10, name='load')\
+            .production(cost=20, quantity=10, name='nuclear')\
+            .node('c')\
+            .consumption(cost=10 ** 6, quantity=10, name='load')\
+            .production(cost=20, quantity=10, name='nuclear')\
+            .link(src='a', dest='b', quantity=20, cost=2)\
+            .link(src='a', dest='c', quantity=20, cost=2)\
+            .build()
 
         nodes_expected = {}
         nodes_expected['a'] = hd.OutputNode(
@@ -154,12 +166,14 @@ def test_exchange_link_saturation(self):
         :return:
         """
-        study = hd.Study(node_names=['a', 'b', 'c'], horizon=1) \
-            .add_on_node('a', data=hd.Production(cost=10, quantity=[30], name='nuclear')) \
-            .add_on_node('b', data=hd.Consumption(cost=10 ** 6, quantity=[10], name='load')) \
-            .add_on_node('c', data=hd.Consumption(cost=10 ** 6, quantity=[20], name='load')) \
-            .add_link(src='a', dest='b', quantity=[20], cost=2) \
-            .add_link(src='b', dest='c', quantity=[15], cost=2)
+        study = hd.Study(horizon=1)\
+            .network()\
+            .node('a').production(cost=10, quantity=[30], name='nuclear')\
+            .node('b').consumption(cost=10 ** 6, quantity=[10], name='load')\
+            .node('c').consumption(cost=10 ** 6, quantity=[20], name='load')\
+            .link(src='a', dest='b', quantity=[20], cost=2)\
+            .link(src='b', dest='c', quantity=[15], cost=2)\
+            .build()
 
         nodes_expected = {}
         nodes_expected['a'] = hd.OutputNode(productions=[hd.OutputProduction(cost=10, quantity=[[20]], name='nuclear')],
@@ -196,15 +210,20 @@ def test_consumer_cancel_exchange(self):
         :return:
         """
-        study = hd.Study(node_names=['a', 'b', 'c'], horizon=1) \
-            .add_on_node('a', data=hd.Consumption(cost=10 ** 6, quantity=[10], name='load')) \
-            .add_on_node('a', data=hd.Production(cost=10, quantity=[20], name='nuclear')) \
-            .add_on_node('b', data=hd.Consumption(cost=10 ** 6, quantity=[5], name='load')) \
-            .add_on_node('b', data=hd.Production(cost=20, quantity=[15], name='nuclear')) \
-            .add_on_node('c', data=hd.Consumption(cost=10 ** 6, quantity=[20], name='load')) \
-            .add_on_node('c', data=hd.Production(cost=10, quantity=[10], name='nuclear')) \
-            .add_link(src='a', dest='b', quantity=[20], cost=2) \
-            .add_link(src='b', dest='c', quantity=[20], cost=2)
+        study = hd.Study(horizon=1)\
+            .network()\
+            .node('a')\
+            .consumption(cost=10 ** 6, quantity=10, name='load')\
+            .production(cost=10, quantity=20, name='nuclear')\
+            .node('b')\
+            .consumption(cost=10 ** 6, quantity=5, name='load')\
+            .production(cost=20, quantity=15, name='nuclear')\
+            .node('c')\
+            .consumption(cost=10 ** 6, quantity=20, name='load')\
+            .production(cost=10, quantity=10, name='nuclear')\
+            .link(src='a', dest='b', quantity=20, cost=2)\
+            .link(src='b', dest='c', quantity=20, cost=2)\
+            .build()
 
         nodes_expected = {}
         nodes_expected['a'] = hd.OutputNode(
@@ -254,14 +273,19 @@ def test_many_links_on_node(self):
         :return:
         """
-        study = hd.Study(node_names=['a', 'b', 'c'], horizon=2) \
-            .add_on_node('a', data=hd.Consumption(cost=10 ** 6, quantity=10, name='load')) \
-            .add_on_node('a', data=hd.Production(cost=80, quantity=20, name='gas')) \
-            .add_on_node('b', data=hd.Consumption(cost=10 ** 6, quantity=[15, 25], name='load')) \
-            .add_on_node('c', data=hd.Production(cost=50, quantity=30, name='nuclear')) \
-            .add_link(src='a', dest='b', quantity=20, cost=10) \
-            .add_link(src='c', dest='a', quantity=20, cost=10) \
-            .add_link(src='c', dest='b', quantity=15, cost=10)
+        study = hd.Study(horizon=2)\
+            .network()\
+            .node('a')\
+            .consumption(cost=10 ** 6, quantity=10, name='load')\
+            .production(cost=80, quantity=20, name='gas')\
+            .node('b')\
+            .consumption(cost=10 ** 6, quantity=[15, 25], name='load')\
+            .node('c')\
+            .production(cost=50, quantity=30, name='nuclear')\
+            .link(src='a', dest='b', quantity=20, cost=10)\
+            .link(src='c', dest='a', quantity=20, cost=10)\
+            .link(src='c', dest='b', quantity=15, cost=10)\
+            .build()
 
         nodes_expected = {}
diff --git a/tests/optimizer/lp/test_mapper.py b/tests/optimizer/lp/test_mapper.py
index acf930a..0609550 100644
--- a/tests/optimizer/lp/test_mapper.py
+++ b/tests/optimizer/lp/test_mapper.py
@@ -18,10 +18,14 @@ class TestInputMapper(unittest.TestCase):
     def test_map_input(self):
         # Input
-        study = Study(['a', 'be'], horizon=2, nb_scn=2) \
-            .add_on_node('a', Consumption(name='load', quantity=[[10, 1], [20, 2]], cost=10)) \
-            .add_on_node('a', Production(name='nuclear', quantity=[[12, 2], [21, 20]], cost=10)) \
-            .add_link(src='a', dest='be', quantity=[[10, 3], [20, 30]], cost=2)
+        study = Study(horizon=2, nb_scn=2) \
+            .network()\
+            .node('a')\
+            .consumption(name='load', quantity=[[10, 1], [20, 2]], cost=10)\
+            .production(name='nuclear', quantity=[[12, 2], [21, 20]], cost=10)\
+            .node('be')\
+            .link(src='a', dest='be', quantity=[[10, 3], [20, 30]], cost=2)\
+            .build()
 
         s = MockSolver()
 
@@ -48,10 +52,14 @@ def test_map_input(self):
 
 class TestOutputMapper(unittest.TestCase):
     def test_map_output(self):
         # Input
-        study = Study(['a', 'be'], horizon=2, nb_scn=2) \
-            .add_on_node('a', Consumption(name='load', quantity=[[10, 1], [20, 2]], cost=10)) \
-            .add_on_node('a', Production(name='nuclear', quantity=[[12, 2], [21, 20]], cost=10)) \
-            .add_link(src='a', dest='be', quantity=[[10, 3], [20, 30]], cost=2)
+        study = Study(horizon=2, nb_scn=2) \
+            .network()\
+            .node('a')\
+            .consumption(name='load', quantity=[[10, 1], [20, 2]], cost=10)\
+            .production(name='nuclear', quantity=[[12, 2], [21, 20]], cost=10)\
+            .node('be')\
+            .link(src='a', dest='be', quantity=[[10, 3], [20, 30]], cost=2)\
+            .build()
 
         s = MockSolver()
         mapper = OutputMapper(study=study)
diff --git a/tests/optimizer/lp/test_optimizer.py b/tests/optimizer/lp/test_optimizer.py
index 388d029..96ae57e 100644
--- a/tests/optimizer/lp/test_optimizer.py
+++ b/tests/optimizer/lp/test_optimizer.py
@@ -76,8 +76,8 @@ def test_add_node(self):
 
 class TestSolve(unittest.TestCase):
     def test_solve_batch(self):
         # Input
-        study = Study(node_names=['a'], horizon=1, nb_scn=1) \
-            .add_on_node(node='a', data=Consumption(name='load', cost=10, quantity=[10]))
+        study = Study(horizon=1, nb_scn=1) \
+            .network().node('a').consumption(name='load', cost=10, quantity=10).build()
 
         # Mock
         solver = MockSolver()
@@ -115,8 +115,8 @@ def test_solve_batch(self):
 
     def test_solve(self):
         # Input
-        study = Study(node_names=['a'], horizon=1, nb_scn=1) \
-            .add_on_node(node='a', data=Consumption(name='load', cost=10, quantity=[10]))
+        study = Study(horizon=1, nb_scn=1) \
+            .network().node('a').consumption(name='load', cost=10, quantity=10).build()
 
         # Expected
         out_a = OutputNode(consumptions=[OutputConsumption(name='load', cost=10, quantity=[0])],
diff --git a/tests/optimizer/remote/test_optimizer.py b/tests/optimizer/remote/test_optimizer.py
index d8bc96e..d37279a 100644
--- a/tests/optimizer/remote/test_optimizer.py
+++ b/tests/optimizer/remote/test_optimizer.py
@@ -7,63 +7,117 @@
 
 import pickle
 import unittest
+from typing import Dict, List, Tuple
 from unittest.mock import MagicMock
 
+from hadar import RemoteOptimizer
 from hadar.optimizer.input import Study, Consumption
 from hadar.optimizer.output import Result, OutputConsumption, OutputNode
-from hadar.optimizer.remote.optimizer import _solve_remote_wrap
-
-
-class MockRequest:
-    pass
+from hadar.optimizer.remote.optimizer import _solve_remote_wrap, ServerError
 
 
 class MockResponse:
     def __init__(self, content, code=200):
-        self.content = content
+        self.content = pickle.dumps(content)
         self.status_code = code
 
+
+class MockRequest:
+    def __init__(self, unit: unittest.TestCase, post: List[Dict], get: List[Dict]):
+        self.unit = unit
+        self._post = post
+        self._get = get
+
+    @staticmethod
+    def cut_url(url):
+        return url[4:]  # Remove the 'host' prefix at the beginning
+
+    def get(self, url, params):
+        self.unit.assertEqual(self._get[0]['url'], MockRequest.cut_url(url))
+        self.unit.assertEqual(self._get[0]['params'], params)
+        res = self._get[0]['res']
+        del self._get[0]
+        return res
+
+    def post(self, url, params, data):
+        self.unit.assertEqual(self._post[0]['url'], MockRequest.cut_url(url))
+        self.unit.assertEqual(self._post[0]['params'], params)
+        self.unit.assertEqual(pickle.dumps(self._post[0]['data']), data)
+        res = self._post[0]['res']
+        del self._post[0]
+        return res
+
+
 class RemoteOptimizerTest(unittest.TestCase):
     def setUp(self) -> None:
-        self.study = Study(node_names=['a'], horizon=1) \
-            .add_on_node('a', data=Consumption(cost=0, quantity=[0], name='load'))
+        self.study = Study(horizon=1) \
+            .network().node('a').consumption(cost=0, quantity=[0], name='load').build()
 
         self.result = Result(nodes={
             'a': OutputNode(consumptions=[OutputConsumption(cost=0, quantity=[0], name='load')],
                             productions=[], links=[])})
 
-    def test_success(self):
-        requests = MockRequest()
-        requests.post = MagicMock(return_value=MockResponse(pickle.dumps(self.result)))
-
-        _solve_remote_wrap(study=self.study, url='localhost', token='pwd', rqt=requests)
-
-        requests.post.assert_called_with(data=pickle.dumps(self.study), url='localhost', params={'token': 'pwd'})
+    def test_job_terminated(self):
+        requests = MockRequest(unit=self,
+                               post=[dict(url='/study', params={'token': 'pwd'}, data=self.study,
+                                          res=MockResponse({'job': 'myid', 'status': 'QUEUED', 'progress': 1}))
+                                     ],
+                               get=[dict(url='/result/myid', params={'token': 'pwd'},
+                                         res=MockResponse({'status': 'QUEUED', 'progress': 1})),
+                                    dict(url='/result/myid', params={'token': 'pwd'},
+                                         res=MockResponse({'status': 'COMPUTING', 'progress': 0})),
+                                    dict(url='/result/myid', params={'token': 'pwd'},
+                                         res=MockResponse({'status': 'TERMINATED', 'result': 'myresult'}))
+                                    ])
+
+        res = _solve_remote_wrap(study=self.study, url='host', token='pwd', rqt=requests)
+        self.assertEqual('myresult', res)
+
+    def test_job_error(self):
+        requests = MockRequest(unit=self,
+                               post=[dict(url='/study', params={'token': 'pwd'}, data=self.study,
+                                          res=MockResponse({'job': 'myid', 'status': 'QUEUED', 'progress': 1}))
+                                     ],
+                               get=[dict(url='/result/myid', params={'token': 'pwd'},
+                                         res=MockResponse({'status': 'QUEUED', 'progress': 1})),
+                                    dict(url='/result/myid', params={'token': 'pwd'},
+                                         res=MockResponse({'status': 'COMPUTING', 'progress': 0})),
+                                    dict(url='/result/myid', params={'token': 'pwd'},
+                                         res=MockResponse({'status': 'ERROR', 'message': 'HUGE ERROR'}))
+                                    ])
+
+        self.assertRaises(ServerError,
+                          lambda: _solve_remote_wrap(study=self.study, url='host', token='pwd', rqt=requests))
 
     def test_404(self):
-        requests = MockRequest()
+        requests = MockRequest(unit=self,
+                               post=[dict(url='/study', params={'token': 'pwd'}, data=self.study,
+                                          res=MockResponse(None, 404))],
+                               get=[])
-        requests.post = MagicMock(return_value=MockResponse(content=None, code=404))
 
         self.assertRaises(ValueError,
-                          lambda: _solve_remote_wrap(study=self.study, url='localhost', token='pwd', rqt=requests))
-
-        requests.post.assert_called_with(data=pickle.dumps(self.study), url='localhost', params={'token': 'pwd'})
+                          lambda: _solve_remote_wrap(study=self.study, url='host', token='pwd', rqt=requests))
 
     def test_403(self):
-        requests = MockRequest()
-        requests.post = MagicMock(return_value=MockResponse(content=None, code=403))
+        requests = MockRequest(unit=self,
+                               post=[dict(url='/study', params={'token': 'pwd'}, data=self.study,
+                                          res=MockResponse(None, 403))],
+                               get=[])
 
         self.assertRaises(ValueError,
-                          lambda: _solve_remote_wrap(study=self.study, url='localhost', token='pwd', rqt=requests))
-
-        requests.post.assert_called_with(data=pickle.dumps(self.study), url='localhost', params={'token': 'pwd'})
+                          lambda: _solve_remote_wrap(study=self.study, url='host', token='pwd', rqt=requests))
 
     def test_500(self):
-        requests = MockRequest()
-        requests.post = MagicMock(return_value=MockResponse(content=None, code=500))
+        requests = MockRequest(unit=self,
+                               post=[dict(url='/study', params={'token': 'pwd'}, data=self.study,
+                                          res=MockResponse(None, 500))],
+                               get=[])
 
         self.assertRaises(IOError,
-                          lambda: _solve_remote_wrap(study=self.study, url='localhost', token='pwd', rqt=requests))
+                          lambda: _solve_remote_wrap(study=self.study, url='host', token='pwd', rqt=requests))
 
-        requests.post.assert_called_with(data=pickle.dumps(self.study), url='localhost', params={'token': 'pwd'})
\ No newline at end of file
+    def no_test_server(self):
+        optim = RemoteOptimizer(url='http://localhost:5000')
+        res = optim.solve(self.study)
+        print(res)
\ No newline at end of file
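
The reworked mocks encode the new client/server protocol: one POST of the pickled study to `/study` returns a job id, then the client polls GET `/result/<job>` until the job reaches `TERMINATED`, or fails with `ServerError` on `ERROR`. A hedged sketch of that loop, not the library's exact implementation:

```python
import pickle
import time

import requests


def poll_solve(study, url, token):
    # Sketch only: mirrors the request sequence asserted by MockRequest above.
    rep = requests.post(url + '/study', params={'token': token}, data=pickle.dumps(study))
    job = pickle.loads(rep.content)['job']

    while True:
        state = pickle.loads(requests.get(url + '/result/%s' % job,
                                          params={'token': token}).content)
        if state['status'] == 'TERMINATED':
            return state['result']
        if state['status'] == 'ERROR':
            raise RuntimeError(state['message'])  # the library raises ServerError here
        time.sleep(1)  # assumed backoff; the real client also reports progress
```
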
diff --git a/tests/optimizer/test_input.py b/tests/optimizer/test_input.py
index 7400866..f62b11c 100644
--- a/tests/optimizer/test_input.py
+++ b/tests/optimizer/test_input.py
@@ -18,98 +18,120 @@ def test_create_study(self):
         p = Production(name='nuclear', cost=20, quantity=10)
         l = Link(dest='a', cost=20, quantity=10)
 
-        study = Study(['a', 'b'], horizon=1) \
-            .add_on_node(node='a', data=c) \
-            .add_on_node(node='a', data=p) \
-            .add_link(src='b', dest='a', cost=20, quantity=10)
+        study = Study(horizon=1) \
+            .network() \
+            .node('a') \
+            .consumption(name='load', cost=20, quantity=10) \
+            .production(name='nuclear', cost=20, quantity=10) \
+            .node('b') \
+            .link(src='b', dest='a', cost=20, quantity=10) \
+            .build()
 
         self.assertEqual(c, study.nodes['a'].consumptions[0])
         self.assertEqual(p, study.nodes['a'].productions[0])
         self.assertEqual(l, study.nodes['b'].links[0])
         self.assertEqual(1, study.horizon)
 
-    def test_wrong_node_list(self):
-        def test():
-            study = Study(node_names=['fr', 'be', 'de', 'be'], horizon=1)
-
-        self.assertRaises(ValueError, test)
 
     def test_wrong_production_cost(self):
         def test():
-            study = Study(node_names=['fr'], horizon=1) \
-                .add_on_node(node='fr', data=Production(name='solar', cost=-1, quantity=10))
+            study = Study(horizon=1) \
+                .network().node('fr').production(name='solar', cost=-1, quantity=10).build()
 
         self.assertRaises(ValueError, test)
 
     def test_wrong_production_quantity(self):
         def test():
-            study = Study(node_names=['fr'], horizon=1) \
-                .add_on_node(node='fr', data=Production(name='solar', cost=10, quantity=-1))
+            study = Study(horizon=1) \
+                .network().node('fr').production(name='solar', cost=1, quantity=-10).build()
 
         self.assertRaises(ValueError, test)
 
     def test_wrong_production_name(self):
         def test():
-            study = Study(node_names=['fr'], horizon=1) \
-                .add_on_node(node='fr', data=Production(name='solar', cost=10, quantity=10)) \
-                .add_on_node(node='fr', data=Production(name='solar', cost=10, quantity=10))
+            study = Study(horizon=1) \
+                .network()\
+                .node('fr')\
+                .production(name='solar', cost=1, quantity=10)\
+                .production(name='solar', cost=1, quantity=10)\
+                .build()
 
         self.assertRaises(ValueError, test)
 
     def test_wrong_consumption_cost(self):
         def test():
-            study = Study(node_names=['fr'], horizon=1) \
-                .add_on_node(node='fr', data=Consumption(name='load', cost=-10, quantity=10))
+            study = Study(horizon=1) \
+                .network().node('fr').consumption(name='load', cost=-1, quantity=10).build()
 
         self.assertRaises(ValueError, test)
 
     def test_wrong_consumption_quantity(self):
         def test():
-            study = Study(node_names=['fr'], horizon=1) \
-                .add_on_node(node='fr', data=Consumption(name='load', cost=10, quantity=-10))
+            study = Study(horizon=1) \
+                .network().node('fr').consumption(name='load', cost=1, quantity=-10).build()
 
         self.assertRaises(ValueError, test)
 
     def test_wrong_consumption_name(self):
         def test():
-            study = Study(node_names=['fr'], horizon=1) \
-                .add_on_node(node='fr', data=Consumption(name='load', cost=10, quantity=10)) \
-                .add_on_node(node='fr', data=Consumption(name='load', cost=10, quantity=10))
+            study = Study(horizon=1) \
+                .network()\
+                .node('fr')\
+                .consumption(name='load', cost=1, quantity=10)\
+                .consumption(name='load', cost=1, quantity=10)\
+                .build()
 
         self.assertRaises(ValueError, test)
 
     def test_wrong_link_cost(self):
         def test():
-            study = Study(node_names=['fr', 'be'], horizon=1) \
-                .add_link(src='fr', dest='be', cost=-10, quantity=10)
+            study = Study(horizon=1) \
+                .network()\
+                .node('fr')\
+                .node('be')\
+                .link(src='fr', dest='be', cost=-10, quantity=10)\
+                .build()
 
         self.assertRaises(ValueError, test)
 
     def test_wrong_link_quantity(self):
         def test():
-            study = Study(node_names=['fr', 'be'], horizon=1) \
-                .add_link(src='fr', dest='be', cost=10, quantity=-10)
+            study = Study(horizon=1) \
+                .network()\
+                .node('fr')\
+                .node('be')\
+                .link(src='fr', dest='be', cost=10, quantity=-10)\
+                .build()
 
         self.assertRaises(ValueError, test)
 
     def test_wrong_link_dest_not_node(self):
         def test():
-            study = Study(node_names=['fr', 'be'], horizon=1) \
-                .add_link(src='fr', dest='it', cost=10, quantity=10)
+            study = Study(horizon=1) \
+                .network() \
+                .node('fr') \
+                .node('be') \
+                .link(src='fr', dest='it', cost=10, quantity=10) \
+                .build()
 
         self.assertRaises(ValueError, test)
 
     def test_wrong_link_dest_not_unique(self):
         def test():
-            study = Study(node_names=['fr', 'be'], horizon=1) \
-                .add_link(src='fr', dest='be', cost=10, quantity=10) \
-                .add_link(src='fr', dest='be', cost=10, quantity=10)
+            study = Study(horizon=1) \
+                .network() \
+                .node('fr') \
+                .node('be') \
+                .link(src='fr', dest='be', cost=10, quantity=10) \
+                .link(src='fr', dest='be', cost=10, quantity=10) \
+                .build()
 
         self.assertRaises(ValueError, test)
 
     def test_validate_quantity_perfect_size(self):
         # Input
-        study = Study(node_names=['a'], horizon=10, nb_scn=2)
+        study = Study(horizon=10, nb_scn=2).network().build()
         i = np.ones((2, 10))
 
         # Test
@@ -118,7 +140,7 @@ def test_validate_quantity_perfect_size(self):
 
     def test_validate_quantity_expend_scn(self):
         # Input
-        study = Study(node_names=[], horizon=5, nb_scn=2)
+        study = Study(horizon=5, nb_scn=2).network().build()
         i = [1, 2, 3, 4, 5]
 
         # Expect
@@ -131,7 +153,7 @@ def test_validate_quantity_expend_scn(self):
 
     def test_validate_quantity_expend_horizon(self):
         # Input
-        study = Study(node_names=[], horizon=2, nb_scn=5)
+        study = Study(horizon=2, nb_scn=5).network().build()
         i = [[1], [2], [3], [4], [5]]
 
         # Expect
@@ -147,7 +169,7 @@ def test_validate_quantity_expend_horizon(self):
 
     def test_validate_quantity_expend_both(self):
         # Input
-        study = Study(node_names=[], horizon=2, nb_scn=3)
+        study = Study(horizon=2, nb_scn=3).network().build()
         i = 1
 
         # Expect
@@ -159,10 +181,10 @@ def test_validate_quantity_expend_both(self):
 
     def test_validate_quantity_wrong_size(self):
         # Input
-        study = Study(node_names=[], horizon=2)
+        study = Study(horizon=2).network().build()
 
         self.assertRaises(ValueError, lambda: study._validate_quantity([4, 5, 1]))
 
     def test_validate_quantity_negative(self):
         # Input
-        study = Study(node_names=[], horizon=3)
+        study = Study(horizon=3).network().build()
 
         self.assertRaises(ValueError, lambda: study._validate_quantity([4, -5, 1]))
\ No newline at end of file
diff --git a/tests/viewer/test_html.py b/tests/viewer/test_html.py
index 94844fd..952ae60 100644
--- a/tests/viewer/test_html.py
+++ b/tests/viewer/test_html.py
@@ -18,15 +18,18 @@ class TestHTMLPlotting(unittest.TestCase):
     def setUp(self) -> None:
-        self.study = Study(['a', 'b'], horizon=3, nb_scn=2) \
-            .add_on_node('a', data=Consumption(cost=10 ** 6, quantity=[[20, 10, 2], [10, 5, 3]], name='load')) \
-            .add_on_node('a', data=Consumption(cost=10 ** 6, quantity=[[30, 15, 3], [15, 7, 2]], name='car')) \
-            .add_on_node('a', data=Production(cost=10, quantity=[[60, 30, 5], [30, 15, 3]], name='prod')) \
-            \
-            .add_on_node('b', data=Consumption(cost=10 ** 6, quantity=[[40, 20, 2], [20, 10, 1]], name='load')) \
-            .add_on_node('b', data=Production(cost=20, quantity=[[10, 5, 1], [5, 3, 1]], name='prod')) \
-            .add_on_node('b', data=Production(cost=30, quantity=[[20, 10, 2], [10, 5, 1]], name='nuclear')) \
-            .add_link(src='a', dest='b', quantity=[[10, 10, 10], [5, 5, 5]], cost=2)
+        self.study = Study(horizon=3, nb_scn=2)\
+            .network()\
+            .node('a')\
+            .consumption(cost=10 ** 6, quantity=[[20, 10, 2], [10, 5, 3]], name='load')\
+            .consumption(cost=10 ** 6, quantity=[[30, 15, 3], [15, 7, 2]], name='car')\
+            .production(cost=10, quantity=[[60, 30, 5], [30, 15, 3]], name='prod')\
+            .node('b')\
+            .consumption(cost=10 ** 6, quantity=[[40, 20, 2], [20, 10, 1]], name='load')\
+            .production(cost=20, quantity=[[10, 5, 1], [5, 3, 1]], name='prod')\
+            .production(cost=30, quantity=[[20, 10, 2], [10, 5, 1]], name='nuclear')\
+            .link(src='a', dest='b', quantity=[[10, 10, 10], [5, 5, 5]], cost=2)\
+            .build()
 
         optimizer = LPOptimizer()
         self.result = optimizer.solve(study=self.study)
@@ -38,7 +41,7 @@ def setUp(self) -> None:
         self.hash = hashlib.sha3_256()
 
     def test_stack(self):
-        fig = self.plot.node('a').stack(scn=0)
+        fig = self.plot.network().node('a').stack(scn=0)
         self.assert_fig_hash('d9f9f004b98ca62be934d69d4fd0c1a302512242', fig)
 
     def test_map_exchanges(self):
@@ -47,23 +50,23 @@ def test_map_exchanges(self):
         self.assert_fig_hash('49d81d1457b2ac78e1fc6ae4c1fc6215b8a0bbe4', fig)
 
     def test_plot_timeline(self):
-        fig = self.plot.consumption(node='a', name='load').timeline()
+        fig = self.plot.network().node('a').consumption('load').timeline()
         self.assert_fig_hash('ba776202b252c9df5c81ca869b2e2d85e56e5589', fig)
 
-        fig = self.plot.production(node='b', name='nuclear').timeline()
+        fig = self.plot.network().node('b').production('nuclear').timeline()
         self.assert_fig_hash('33baf5d01fda12b6a2d025abf8421905fc24abe1', fig)
 
-        fig = self.plot.link(src='a', dest='b').timeline()
+        fig = self.plot.network().node('a').link('b').timeline()
         self.assert_fig_hash('0c87d1283db5250858b14e2240d30f9059459e65', fig)
 
     def test_plot_monotone(self):
-        fig = self.plot.consumption(node='a', name='load').monotone(scn=0)
+        fig = self.plot.network().node('a').consumption('load').monotone(scn=0)
         self.assert_fig_hash('1ffa51a52b066aab8cabb817c11fd1272549eb9d', fig)
 
-        fig = self.plot.production(node='b', name='nuclear').monotone(t=0)
+        fig = self.plot.network().node('b').production('nuclear').monotone(t=0)
         self.assert_fig_hash('e059878aac45330810578482df8c3d19261f7f75', fig)
 
-        fig = self.plot.link(src='a', dest='b').monotone(scn=0)
+        fig = self.plot.network().node('a').link('b').monotone(scn=0)
         self.assert_fig_hash('1d5dba9e2189c741e5daa36d69ff1a879f169964', fig)
 
     def test_rac_heatmap(self):
@@ -71,13 +74,13 @@ def test_rac_heatmap(self):
         self.assert_fig_hash('2b87a4e781e9eeb532f5d2b091c474bb0de625fd', fig)
 
     def test_gaussian(self):
-        fig = self.plot.consumption(node='a', name='load').gaussian(scn=0)
+        fig = self.plot.network().node('a').consumption('load').gaussian(scn=0)
         self.assert_fig_hash('4f3676a65cde6c268233679e1d0e6207df62764d', fig)
 
-        fig = self.plot.production(node='b', name='nuclear').gaussian(t=0)
+        fig = self.plot.network().node('b').production('nuclear').gaussian(t=0)  # Fail devops
         self.assert_fig_hash('45ffe15df1d72829ebe2283c9c4b65ee8465c978', fig)
 
-        fig = self.plot.link(src='a', dest='b').gaussian(scn=0)
+        fig = self.plot.network().node('a').link('b').gaussian(scn=0)
         self.assert_fig_hash('52620565ce8ea670b18707cccf30594b5c3d58ea', fig)
 
     def assert_fig_hash(self, expected: str, fig: go.Figure):
diff --git a/tests/workflow/test_integration.py b/tests/workflow/test_integration.py
index 2f54750..2f93d80 100644
--- a/tests/workflow/test_integration.py
+++ b/tests/workflow/test_integration.py
@@ -26,12 +26,12 @@ def test_pipeline(self):
         i = pd.DataFrame(data={'data': np.ones(1000) * 100})
         pipe = RepeatScenario(n=500) + \
-            Rename(rename={'data': 'quantity'}) + \
+            Rename(data='quantity') + \
             Fault(loss=10, occur_freq=0.1, downtime_min=5, downtime_max=10) +\
             Clip(lower=80)
 
         # Test
-        o = pipe.compute(i)
+        o = pipe(i)
 
         # Verify io interfaces
         self.assertEqual(['data'], pipe.plug.inputs)
diff --git a/tests/workflow/test_pipeline.py b/tests/workflow/test_pipeline.py
index 9046156..21e9b9f 100644
--- a/tests/workflow/test_pipeline.py
+++ b/tests/workflow/test_pipeline.py
@@ -136,7 +136,7 @@ def test_compute(self):
         exp = pd.DataFrame({(0, 'a'): [4, 8, 12]})
 
         # Test & Verify
-        o = pipe.compute(i)
+        o = pipe(i)
         pd.testing.assert_frame_equal(exp, o)
 
     def test_add(self):
@@ -149,7 +149,7 @@ def test_add(self):
         exp = pd.DataFrame({(0, 'd'): [1, 1, 1], (0, 'r'): [0, 0, 0]}, dtype=float)
 
         # Test & Verify
-        o = pipe.compute(i)
+        o = pipe(i)
         self.assertEqual(3, len(pipe.stages))
         self.assertIsInstance(pipe.plug, RestrictedPlug)
         pd.testing.assert_frame_equal(exp, o)
@@ -163,7 +163,7 @@ def test_link_pipeline_free_to_free(self):
         exp = pd.DataFrame({(0, 'a'): [2, 4, 6], (0, 'b'): [8, 9, 9]})
 
         # Test & Verify
-        o = pipe.compute(i)
+        o = pipe(i)
         pd.testing.assert_frame_equal(exp, o)
         self.assertEqual([], pipe.plug.inputs)
         self.assertEqual([], pipe.plug.outputs)
@@ -177,7 +177,7 @@ def test_link_pipeline_free_to_restricted(self):
         exp = pd.DataFrame({(0, 'd'): [2, 4, 5], (0, 'r'): [4, 0, 4]}, dtype='float')
 
         # Test & Verify
-        o = pipe.compute(i)
+        o = pipe(i)
         pd.testing.assert_frame_equal(exp, o)
         self.assertEqual(['a', 'b'], pipe.plug.inputs)
         self.assertEqual(['d', 'r'], pipe.plug.outputs)
@@ -191,7 +191,7 @@ def test_link_pipeline_restricted_to_free(self):
         exp = pd.DataFrame({(0, 'd'): [4, 8, 10], (0, 'r'): [4, 0, 4]}, dtype='float')
 
         # Test & Verify
-        o = pipe.compute(i)
+        o = pipe(i)
         pd.testing.assert_frame_equal(exp, o)
         self.assertEqual(['a', 'b'], pipe.plug.inputs)
         self.assertEqual(['d', 'r'], pipe.plug.outputs)
@@ -205,7 +205,7 @@ def test_link_pipeline_restricted_to_restricted(self):
         exp = pd.DataFrame({(0, 'd'): [2, 4, 5], (0, '-d'): [-2, -4, -5], (0, 'r'): [2, 0, 2]}, dtype='float')
 
         # Test & Verify
-        o = pipe.compute(i)
+        o = pipe(i)
         pd.testing.assert_frame_equal(exp, o)
         self.assertEqual({'a', 'b'}, set(pipe.plug.inputs))
         self.assertEqual({'d', '-d', 'r'}, set(pipe.plug.outputs))
@@ -227,14 +227,14 @@ def test_compute(self):
                             (1, 'a'): [20, 40, 60], (1, 'b'): [80, 100, 120]})
 
         # Test & Verify
-        o = stage.compute(i)
+        o = stage(i)
         pd.testing.assert_frame_equal(exp, o)
 
     def test_wrong_compute(self):
         i = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
         pipe = Inverse()
 
-        self.assertRaises(ValueError, lambda: pipe.compute(i))
+        self.assertRaises(ValueError, lambda: pipe(i))
 
     def test_standardize_column(self):
         i = pd.DataFrame({'a': [1, 2, 3]})
@@ -270,7 +270,7 @@ def test_compute(self):
                             (1, 'd'): [4, 2, 2], (1, 'r'): [0, 10, 0]}, dtype='float')
 
         # Test & Verify
-        o = pipe.compute(i)
+        o = pipe(i)
         pd.testing.assert_frame_equal(exp, o)
 
 
@@ -285,7 +285,7 @@ def test_compute(self):
         exp = pd.DataFrame({(0, 'a'): [12, 50, 50, 12], (0, 'b'): [50, 23, 50, 10]})
 
         # Test & Verify
-        o = pipe.compute(i)
+        o = pipe(i)
         pd.testing.assert_frame_equal(exp, o)
 
 
@@ -294,13 +294,13 @@ def test_compute(self):
         # Input
         i = pd.DataFrame({'a': [12, 54, 87, 12], 'b': [98, 23, 65, 4]})
 
-        pipe = Rename({'a': 'alpha'})
+        pipe = Rename(a='alpha')
 
         # Expected
         exp = pd.DataFrame({(0, 'alpha'): [12, 54, 87, 12], (0, 'b'): [98, 23, 65, 4]})
 
         # Test & Verify
-        o = pipe.compute(i)
+        o = pipe(i)
         pd.testing.assert_frame_equal(exp, o)
 
 
@@ -315,7 +315,7 @@ def test_compute(self):
         exp = pd.DataFrame({(0, 'a'): [12, 54, 87, 12]})
 
         # Test & Verify
-        o = pipe.compute(i)
+        o = pipe(i)
         pd.testing.assert_frame_equal(exp, o)
 
 
@@ -332,7 +332,7 @@ def test_compute(self):
         exp_total_loss = exp_time_down * pipe.loss
 
         # Test & Verify
-        o = pipe.compute(i)
+        o = pipe(i)
 
         time_down = o.where(o < power).dropna().size
         self.assertAlmostEqual(exp_time_down, time_down, delta=exp_time_down*0.1)
@@ -357,8 +357,8 @@ def test_compute(self):
                             (3, 'a'): [12, 54, 87, 12], (3, 'b'): [98, 23, 65, 4]})
 
         # Test & Verify
-        o = pipe.compute(i)
+        o = pipe(i)
         pd.testing.assert_frame_equal(exp1, o)
 
-        o = pipe.compute(o)
+        o = pipe(o)
         pd.testing.assert_frame_equal(exp2, o)
\ No newline at end of file
diff --git a/tests/workflow/test_shuffler.py b/tests/workflow/test_shuffler.py
index 0192881..105b8c7 100644
--- a/tests/workflow/test_shuffler.py
+++ b/tests/workflow/test_shuffler.py
@@ -27,7 +27,7 @@ def __init__(self, return_value):
         self.return_value = return_value
         self.input = None
 
-    def compute(self, timeline):
+    def __call__(self, timeline):
         self.input = timeline
         return self.return_value
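
As the final mock shows, anything implementing `__call__(timeline)` can now stand in for a stage or a whole pipeline inside the shuffler. A toy custom stage under that contract (a sketch; it assumes the `Stage`/`FreePlug` API used elsewhere in `hadar.workflow.pipeline`):

```python
import pandas as pd

from hadar.workflow.pipeline import Stage, FreePlug


class Double(Stage):
    """Toy stage: doubles every value, pluggable anywhere thanks to FreePlug."""

    def __init__(self):
        Stage.__init__(self, plug=FreePlug())

    def _process_timeline(self, timeline: pd.DataFrame) -> pd.DataFrame:
        return timeline * 2


# Double()(timeline) works directly, and so does (Double() + Double())(timeline).
```
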