Commit 5ce82ed

Merge pull request #340 from elfi-dev/dev
Release v.0.7.7

2 parents 823d34b + 9ae457f, commit 5ce82ed
26 files changed: +353, -131 lines changed

.gitignore (+6)

@@ -105,3 +105,9 @@ ENV/
 *.swp

 notebooks/mydask.png
+
+# vscode-settings
+.vscode
+
+# dask
+dask-worker-space

.readthedocs.yml (+24)

@@ -0,0 +1,24 @@
+# .readthedocs.yml
+# Read the Docs configuration file
+# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
+
+# Required
+version: 2
+
+# Build documentation in the docs/ directory with Sphinx
+sphinx:
+  configuration: docs/conf.py
+
+# Build documentation with MkDocs
+#mkdocs:
+#  configuration: mkdocs.yml
+
+# Optionally build your docs in additional formats such as PDF
+formats:
+  - pdf
+
+# Optionally set the version of Python and requirements required to build your docs
+python:
+  version: 3.5
+  install:
+    - requirements: requirements.txt

.travis.yml (+3, -3)

@@ -2,10 +2,10 @@ matrix:
   include:
   - os: linux
     language: python
-    python: 3.5
+    python: 3.6
  - os: linux
    language: python
-    python: 3.6
+    python: 3.7
   - os: osx
     language: generic
 before_install:
@@ -25,6 +25,6 @@ install:
   - pip install -e .

 script:
-  - if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then ipcluster start -n 2 --daemon ; fi
+  - if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then ipcluster start -n 2 --daemonize ; fi
   #- travis_wait 20 make test
   - make test

CHANGELOG.rst (+13)

@@ -1,6 +1,19 @@
 Changelog
 =========

+
+0.7.7 (2020-10-12)
+------------------
+- Update info to reflect setting python 3.6 as the default version
+- Update documentation to setting python 3.6 as default
+- Add dask support to elfi client options
+- Add python 3.7 to travis tests and remove python 3.5 due to clash with dask
+- Modify progress bar to better indicate ABC-SMC inference status
+- Change networkx support from 1.X to 2.X
+- Improve docstrings in elfi.methods.bo.acquisition
+- Fix readthedocs-build by adding .readthedocs.yml and restricting the build to
+  python3.5, for now
+
 0.7.6 (2020-08-29)
 ------------------
 - Fix incompatibility with scipy>1.5 in bo.utils.stochastic_optimization

CONTRIBUTING.rst (+3, -3)

@@ -75,10 +75,10 @@ Ready to contribute? Here's how to set up `ELFI` for local development.
     $ python -V

 4. Install your local copy and the development requirements into a conda
-   environment. You may need to replace "3.5" in the first line with the python
+   environment. You may need to replace "3.6" in the first line with the python
    version printed in the previous step::

-    $ conda create -n elfi python=3.5 numpy
+    $ conda create -n elfi python=3.6 numpy
     $ source activate elfi
     $ cd elfi
     $ make dev
@@ -127,7 +127,7 @@ Before you submit a pull request, check that it meets these guidelines:
 2. If the pull request adds functionality, the docs should be updated. Put
    your new functionality into a function with a docstring, and add the
    feature to the list in README.rst.
-3. The pull request should work for Python 3.5 and later. Check
+3. The pull request should work for Python 3.6 and later. Check
    https://travis-ci.org/elfi-dev/elfi/pull_requests
    and make sure that the tests pass for all supported Python versions.

README.md (+3, -3)

@@ -1,4 +1,4 @@
-**Version 0.7.6 released!** See the [CHANGELOG](CHANGELOG.rst) and [notebooks](https://github.com/elfi-dev/notebooks).
+**Version 0.7.7 released!** See the [CHANGELOG](CHANGELOG.rst) and [notebooks](https://github.com/elfi-dev/notebooks).

 **NOTE:** For the time being NetworkX 2 is incompatible with ELFI.

@@ -40,7 +40,7 @@ is preferable.
 Installation
 ------------

-ELFI requires Python 3.5 or greater. You can install ELFI by typing in your terminal:
+ELFI requires Python 3.6 or greater. You can install ELFI by typing in your terminal:

 ```
 pip install elfi
@@ -70,7 +70,7 @@ with your default Python environment and can easily use different versions of Py
 in different projects. You can create a virtual environment for ELFI using anaconda with:

 ```
-conda create -n elfi python=3.5 numpy
+conda create -n elfi python=3.6 numpy
 source activate elfi
 pip install elfi
 ```

docs/installation.rst (+5, -5)

@@ -3,7 +3,7 @@
 Installation
 ============

-ELFI requires Python 3.5 or greater (see below how to install). To install ELFI, simply
+ELFI requires Python 3.6 or greater (see below how to install). To install ELFI, simply
 type in your terminal:

 .. code-block:: console
@@ -18,16 +18,16 @@ process.
 .. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/


-Installing Python 3.5
+Installing Python 3.6
 ---------------------

 If you are new to Python, perhaps the simplest way to install it is with Anaconda_ that
-manages different Python versions. After installing Anaconda, you can create a Python 3.5.
+manages different Python versions. After installing Anaconda, you can create a Python 3.6.
 environment with ELFI:

 .. code-block:: console

-    conda create -n elfi python=3.5 numpy
+    conda create -n elfi python=3.6 numpy
     source activate elfi
     pip install elfi

@@ -51,7 +51,7 @@ Resolving these may sometimes go wrong:
 * If you receive an error about missing ``numpy``, please install it first.
 * If you receive an error about `yaml.load`, install ``pyyaml``.
 * On OS X with Anaconda virtual environment say `conda install python.app` and then use `pythonw` instead of `python`.
-* Note that ELFI requires Python 3.5 or greater
+* Note that ELFI requires Python 3.6 or greater
 * In some environments ``pip`` refers to Python 2.x, and you have to use ``pip3`` to use the Python 3.x version
 * Make sure your Python installation meets the versions listed in requirements_.

docs/quickstart.rst (+1, -1)

@@ -3,7 +3,7 @@ Quickstart

 First ensure you have
 `installed <http://elfi.readthedocs.io/en/stable/installation.html>`__
-Python 3.5 (or greater) and ELFI. After installation you can start using
+Python 3.6 (or greater) and ELFI. After installation you can start using
 ELFI:

 .. code:: ipython3

elfi/__init__.py (+1, -1)

@@ -26,4 +26,4 @@
 __email__ = 'elfi-support@hiit.fi'

 # make sure __version_ is on the last non-empty line (read by setup.py)
-__version__ = '0.7.6'
+__version__ = '0.7.7'

elfi/client.py (+8, -2)

@@ -159,7 +159,8 @@ def submit(self, batch=None):
         loaded_net = self.client.load_data(self.compiled_net, self.context, batch_index)
         # Override
         for k, v in batch.items():
-            loaded_net.node[k] = {'output': v}
+            loaded_net.nodes[k].update({'output': v})
+            del loaded_net.nodes[k]['operation']

         task_id = self.client.submit(loaded_net)
         self._pending_batches[batch_index] = task_id
@@ -299,7 +300,12 @@ def compile(cls, source_net, outputs=None):
             outputs = source_net.nodes()
         if not outputs:
             logger.warning("Compiling for no outputs!")
-        outputs = outputs if isinstance(outputs, list) else [outputs]
+        if isinstance(outputs, list):
+            outputs = set(outputs)
+        elif isinstance(outputs, type(source_net.nodes())):
+            outputs = outputs
+        else:
+            outputs = [outputs]

         compiled_net = nx.DiGraph(
             outputs=outputs, name=source_net.graph['name'], observed=source_net.graph['observed'])
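The `loaded_net.node[k]` to `loaded_net.nodes[k]` changes in the hunk above follow the NetworkX 1.x to 2.x migration: node attributes are now reached through the `G.nodes` view, and attributes are passed to `add_node` as keyword arguments. A minimal sketch of the new-style access (assuming NetworkX 2.x is installed; the node name `'mu'` is made up for illustration):

```python
import networkx as nx

G = nx.DiGraph()
# nx2 takes node attributes as keyword arguments; here the attribute
# happens to be named 'attr_dict', matching how elfi stores node state
G.add_node('mu', attr_dict={'_operation': 'prior'})

# nx2: G.nodes[...] is a mutable attribute dict (nx1 used G.node[...])
G.nodes['mu'].update({'output': 42})
assert G.nodes['mu']['output'] == 42

# Deleting a stale key, like the submit() override above does for 'operation'
del G.nodes['mu']['attr_dict']
assert 'attr_dict' not in G.nodes['mu']
```

Assigning `G.nodes[k] = {...}` directly, as the old nx1-style code did, is no longer supported, which is why the diff switches to `update()` plus an explicit `del`.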

elfi/clients/dask.py (+111)

@@ -0,0 +1,111 @@
+"""This module implements a multiprocessing client using dask."""
+
+import itertools
+import os
+
+from dask.distributed import Client as DaskClient
+
+import elfi.client
+
+
+def set_as_default():
+    """Set this as the default client."""
+    elfi.client.set_client()
+    elfi.client.set_default_class(Client)
+
+
+class Client(elfi.client.ClientBase):
+    """A multiprocessing client using dask."""
+
+    def __init__(self):
+        """Initialize a dask client."""
+        self.dask_client = DaskClient()
+        self.tasks = {}
+        self._id_counter = itertools.count()
+
+    def apply(self, kallable, *args, **kwargs):
+        """Add `kallable(*args, **kwargs)` to the queue of tasks. Returns immediately.
+
+        Parameters
+        ----------
+        kallable: callable
+
+        Returns
+        -------
+        task_id: int
+
+        """
+        task_id = self._id_counter.__next__()
+        async_result = self.dask_client.submit(kallable, *args, **kwargs)
+        self.tasks[task_id] = async_result
+        return task_id
+
+    def apply_sync(self, kallable, *args, **kwargs):
+        """Call and returns the result of `kallable(*args, **kwargs)`.
+
+        Parameters
+        ----------
+        kallable: callable
+
+        """
+        return self.dask_client.run_on_scheduler(kallable, *args, **kwargs)
+
+    def get_result(self, task_id):
+        """Return the result from task identified by `task_id` when it arrives.
+
+        Parameters
+        ----------
+        task_id: int
+
+        Returns
+        -------
+        dict
+
+        """
+        async_result = self.tasks.pop(task_id)
+        return async_result.result()
+
+    def is_ready(self, task_id):
+        """Return whether task with identifier `task_id` is ready.
+
+        Parameters
+        ----------
+        task_id: int
+
+        Returns
+        -------
+        bool
+
+        """
+        return self.tasks[task_id].done()
+
+    def remove_task(self, task_id):
+        """Remove task with identifier `task_id` from scheduler.
+
+        Parameters
+        ----------
+        task_id: int
+
+        """
+        async_result = self.tasks.pop(task_id)
+        if not async_result.done():
+            async_result.cancel()
+
+    def reset(self):
+        """Stop all worker processes immediately and clear pending tasks."""
+        self.dask_client.shutdown()
+        self.tasks.clear()
+
+    @property
+    def num_cores(self):
+        """Return the number of processes.
+
+        Returns
+        -------
+        int
+
+        """
+        return os.cpu_count()
+
+
+set_as_default()
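The new dask `Client` hides dask futures behind integer task ids handed out by `itertools.count()`: `apply` submits and records a future, `get_result` pops it and blocks on the result. The same bookkeeping can be sketched with the stdlib's `concurrent.futures` (a toy stand-in under those assumptions, not ELFI's actual API and with no dask dependency):

```python
import itertools
from concurrent.futures import ThreadPoolExecutor


class MiniClient:
    """Toy version of the task-id bookkeeping in elfi.clients.dask.Client."""

    def __init__(self, executor):
        self.executor = executor
        self.tasks = {}
        self._id_counter = itertools.count()

    def apply(self, kallable, *args, **kwargs):
        # Hand out a fresh integer id and remember the pending future
        task_id = next(self._id_counter)
        self.tasks[task_id] = self.executor.submit(kallable, *args, **kwargs)
        return task_id

    def get_result(self, task_id):
        # Pop so a finished task cannot be fetched twice, then block on it
        return self.tasks.pop(task_id).result()


pool = ThreadPoolExecutor(max_workers=2)
client = MiniClient(pool)
task_id = client.apply(pow, 2, 10)   # first id handed out is 0
result = client.get_result(task_id)  # 1024, and the task is forgotten
pool.shutdown()
```

Popping the future in `get_result` and `remove_task` (rather than leaving it in the dict) is what keeps the real client's `tasks` mapping from growing without bound across batches.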

elfi/compiler.py (+12, -13)

@@ -54,8 +54,8 @@ def compile(cls, source_net, compiled_net):
         compiled_net.add_edges_from(source_net.edges(data=True))

         # Compile the nodes to computation nodes
-        for name, data in compiled_net.nodes_iter(data=True):
-            state = source_net.node[name]
+        for name, data in compiled_net.nodes(data=True):
+            state = source_net.nodes[name]['attr_dict']
             if '_output' in state and '_operation' in state:
                 raise ValueError("Cannot compile: both _output and _operation present "
                                  "for node '{}'".format(name))
@@ -92,7 +92,7 @@ def compile(cls, source_net, compiled_net):
         uses_observed = []

         for node in nx.topological_sort(source_net):
-            state = source_net.node[node]
+            state = source_net.nodes[node]['attr_dict']
             if state.get('_observable'):
                 observable.append(node)
                 cls.make_observed_copy(node, compiled_net)
@@ -113,14 +113,14 @@ def compile(cls, source_net, compiled_net):
                 else:
                     link_parent = parent

-                compiled_net.add_edge(link_parent, obs_node, source_net[parent][node].copy())
+                compiled_net.add_edge(link_parent, obs_node, **source_net[parent][node].copy())

         # Check that there are no stochastic nodes in the ancestors
         for node in uses_observed:
             # Use the observed version to query observed ancestors in the compiled_net
             obs_node = observed_name(node)
             for ancestor_node in nx.ancestors(compiled_net, obs_node):
-                if '_stochastic' in source_net.node.get(ancestor_node, {}):
+                if '_stochastic' in source_net.nodes.get(ancestor_node, {}):
                     raise ValueError("Observed nodes must be deterministic. Observed "
                                      "data depends on a non-deterministic node {}."
                                      .format(ancestor_node))
@@ -148,11 +148,10 @@ def make_observed_copy(cls, node, compiled_net, operation=None):
             raise ValueError("Observed node {} already exists!".format(obs_node))

         if operation is None:
-            compiled_dict = compiled_net.node[node].copy()
+            compiled_dict = compiled_net.nodes[node].copy()
         else:
             compiled_dict = dict(operation=operation)
-
-        compiled_net.add_node(obs_node, compiled_dict)
+        compiled_net.add_node(obs_node, **compiled_dict)
         return obs_node
@@ -176,8 +175,8 @@ def compile(cls, source_net, compiled_net):
         instruction_node_map = dict(_uses_batch_size='_batch_size', _uses_meta='_meta')

         for instruction, _node in instruction_node_map.items():
-            for node, d in source_net.nodes_iter(data=True):
-                if d.get(instruction):
+            for node, d in source_net.nodes(data=True):
+                if d['attr_dict'].get(instruction):
                     if not compiled_net.has_node(_node):
                         compiled_net.add_node(_node)
                     compiled_net.add_edge(_node, node, param=_node[1:])
@@ -203,8 +202,8 @@ def compile(cls, source_net, compiled_net):
         logger.debug("{} compiling...".format(cls.__name__))

         _random_node = '_random_state'
-        for node, d in source_net.nodes_iter(data=True):
-            if '_stochastic' in d:
+        for node, d in source_net.nodes(data=True):
+            if '_stochastic' in d['attr_dict']:
                 if not compiled_net.has_node(_random_node):
                     compiled_net.add_node(_random_node)
                 compiled_net.add_edge(_random_node, node, param='random_state')
@@ -230,7 +229,7 @@ def compile(cls, source_net, compiled_net):

         outputs = compiled_net.graph['outputs']
         output_ancestors = nbunch_ancestors(compiled_net, outputs)
-        for node in compiled_net.nodes():
+        for node in list(compiled_net.nodes()):
             if node not in output_ancestors:
                 compiled_net.remove_node(node)
         return compiled_net
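The `list(compiled_net.nodes())` wrapper in the last hunk matters because NetworkX 2 returns live views instead of lists: removing nodes while iterating the view raises a RuntimeError, so the node set must be snapshotted first. The same rule holds for any dict-backed view, which a plain dict illustrates (a toy example, not ELFI code):

```python
# A toy "graph" as a plain dict; nx2's live node views behave the same way
compiled = {'a': [], 'b': [], '_random_state': []}
keep = {'a'}

# Snapshot the keys with list() before mutating,
# mirroring for node in list(compiled_net.nodes())
for node in list(compiled):
    if node not in keep:
        del compiled[node]
assert list(compiled) == ['a']

# Without the snapshot, Python refuses to mutate during iteration
broken = {'a': [], 'b': []}
failed = False
try:
    for node in broken:
        del broken[node]
except RuntimeError:  # "dictionary changed size during iteration"
    failed = True
assert failed
```

Under NetworkX 1.x, `nodes()` already returned a fresh list, which is why the old loop was safe without the explicit `list()` call.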
