Update XAIBase to v3.0.0 (#13)
* Update XAIBase to v3.0.0

* Update `AbstractNeuronSelector` to `AbstractOutputSelector`

* Update README to load VisionHeatmaps

* Load VisionHeatmaps in docs

* Add `LayerNormRule` to rule overview
adrhill authored Feb 21, 2024
1 parent 56127f0 commit d4857bb
Showing 11 changed files with 30 additions and 21 deletions.
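The central breaking change tracked by this commit is XAIBase v3's rename of `AbstractNeuronSelector` to `AbstractOutputSelector`. A minimal sketch of what the renamed selector protocol looks like downstream, assuming the calling convention `idx = ns(a_out)` visible in `src/lrp.jl` below; the concrete selector name here is hypothetical, not part of this diff:

```julia
using XAIBase

# XAIBase v3: "output" selectors replace the former "neuron" selectors.
# A selector maps the model output to the indices that receive relevance.
struct LargestOutputSelector <: XAIBase.AbstractOutputSelector end

# Assumed calling convention, mirroring `idx = ns(a_out)` in src/lrp.jl:
(::LargestOutputSelector)(output::AbstractArray) = [argmax(output)]
```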
4 changes: 2 additions & 2 deletions Project.toml
@@ -1,7 +1,7 @@
name = "RelevancePropagation"
uuid = "0be6dd02-ae9e-43eb-b318-c6e81d6890d8"
authors = ["Adrian Hill <gh@adrianhill.de>"]
version = "1.1.0"
version = "2.0.0-DEV"

[deps]
Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
@@ -20,6 +20,6 @@ Markdown = "1"
Random = "1"
Reexport = "1"
Statistics = "1"
XAIBase = "1.3"
XAIBase = "3"
Zygote = "0.6"
julia = "1.6"
5 changes: 2 additions & 3 deletions README.md
@@ -25,6 +25,7 @@ using a pre-trained VGG16 model from [Metalhead.jl](https://github.com/FluxML/Me

```julia
using RelevancePropagation
using VisionHeatmaps # visualization of explanations as heatmaps
using Flux
using Metalhead # pre-trained vision models

@@ -40,7 +41,7 @@ input = ... # input in WHCN format
composite = EpsilonPlusFlat()
analyzer = LRP(model, composite)
expl = analyze(input, analyzer) # or: expl = analyzer(input)
heatmap(expl) # Show heatmap
heatmap(expl) # show heatmap using VisionHeatmaps.jl

```

@@ -64,8 +65,6 @@ whereas regions in blue are of negative relevance.
| `LRP` with `EpsilonGammaBox` composite | ![][castle-lrp-egb] | ![][streetsign-lrp-egb] |
| `LRP` | ![][castle-lrp] | ![][streetsign-lrp] |



## Acknowledgements
> Adrian Hill acknowledges support by the Federal Ministry of Education and Research (BMBF)
> for the Berlin Institute for the Foundations of Learning and Data (BIFOLD) (01IS18037A).
4 changes: 4 additions & 0 deletions docs/Project.toml
@@ -10,3 +10,7 @@ ImageShow = "4e3cecfd-b093-5904-9786-8bbb286a6a31"
Literate = "98b081ad-f1c9-55d3-8b20-4c87d4299306"
MLDatasets = "eb30cadb-4394-5ae3-aed4-317e484a6458"
RelevancePropagation = "0be6dd02-ae9e-43eb-b318-c6e81d6890d8"
VisionHeatmaps = "27106da1-f8bc-4ca8-8c66-9b8289f1e035"

[compat]
VisionHeatmaps = "1.4"
7 changes: 6 additions & 1 deletion docs/src/api.md
@@ -4,9 +4,14 @@ All methods in RelevancePropagation.jl work by calling `analyze` on an input and
```@docs
analyze
Explanation
heatmap
```

For heatmapping functionality, take a look at either
[VisionHeatmaps.jl](https://julia-xai.github.io/XAIDocs/VisionHeatmaps/stable/) or
[TextHeatmaps.jl](https://julia-xai.github.io/XAIDocs/TextHeatmaps/stable/).
Both provide `heatmap` methods for visualizing explanations,
either for images or text, respectively.

## LRP analyzer
```@docs
LRP
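With the `heatmap` docstring removed from this package's API page, visualization moves to VisionHeatmaps.jl. A small end-to-end sketch under that split; the model and input are toy placeholders, and the `heatmap(expl)` call on an `Explanation` follows the convention shown in the README diff above:

```julia
using RelevancePropagation
using VisionHeatmaps  # `heatmap` now lives here, not in RelevancePropagation.jl
using Flux

model = Chain(Flux.flatten, Dense(28 * 28, 10))  # toy placeholder model
input = rand(Float32, 28, 28, 1, 1)              # dummy image in WHCN format

analyzer = LRP(model)
expl = analyze(input, analyzer)  # or: expl = analyzer(input)
heatmap(expl)                    # visualize the explanation
```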
7 changes: 6 additions & 1 deletion docs/src/literate/basics.jl
@@ -100,7 +100,12 @@ convert2image(MNIST, x)
analyzer = LRP(model)

# This analyzer will return heatmaps that look identical to the `InputTimesGradient` analyzer
# from [ExplainableAI.jl](https://github.com/Julia-XAI/ExplainableAI.jl):
# from [ExplainableAI.jl](https://github.com/Julia-XAI/ExplainableAI.jl).
# We can visualize `Explanation`s by computing a `heatmap` using either
# [VisionHeatmaps.jl](https://julia-xai.github.io/XAIDocs/VisionHeatmaps/stable/) or
# [TextHeatmaps.jl](https://julia-xai.github.io/XAIDocs/TextHeatmaps/stable/),
# either for images or text, respectively.
using VisionHeatmaps

heatmap(input, analyzer)

2 changes: 2 additions & 0 deletions docs/src/literate/crp.jl
@@ -46,6 +46,8 @@ features = IndexedFeatures(1, 2, 10)
# ## Step 3: Use CRP analyzer
# We can now create a [`CRP`](@ref) analyzer
# and use it like any other analyzer from RelevancePropagation.jl:
using VisionHeatmaps

analyzer = CRP(lrp_analyzer, feature_layer, features)
heatmap(input, analyzer)

4 changes: 3 additions & 1 deletion docs/src/literate/custom_rules.jl
@@ -10,6 +10,7 @@

# We start out by loading the same pre-trained LeNet-5 model and MNIST input data:
using RelevancePropagation
using VisionHeatmaps
using Flux
using MLDatasets
using ImageCore
@@ -63,7 +64,8 @@ rules = [
ZeroRule(),
]
analyzer = LRP(model, rules)
heatmap(input, analyzer)

heatmap(input, analyzer) # using VisionHeatmaps.jl

# We just implemented our own version of the ``γ``-rule in 2 lines of code.
# The heatmap perfectly matches the pre-implemented `GammaRule`:
3 changes: 2 additions & 1 deletion docs/src/rules.md
@@ -44,6 +44,7 @@ ZBoxRule
```

## Specialized rules
```@docs; canonical=false
LayerNormRule
GeneralizedGammaRule
```
2 changes: 1 addition & 1 deletion src/crp.jl
@@ -32,7 +32,7 @@ end
# Call to CRP analyzer #
#======================#

function (crp::CRP)(input::AbstractArray{T,N}, ns::AbstractNeuronSelector) where {T,N}
function (crp::CRP)(input::AbstractArray{T,N}, ns::AbstractOutputSelector) where {T,N}
rules = crp.lrp.rules
layers = crp.lrp.model.layers
modified_layers = crp.lrp.modified_layers
4 changes: 2 additions & 2 deletions src/lrp.jl
@@ -56,7 +56,7 @@ LRP(model::Chain, c::Composite; kwargs...) = LRP(model, lrp_rules(model, c); kwa
#==========================#

function (lrp::LRP)(
input::AbstractArray, ns::AbstractNeuronSelector; layerwise_relevances=false
input::AbstractArray, ns::AbstractOutputSelector; layerwise_relevances=false
)
as = get_activations(lrp.model, input) # compute activations aᵏ for all layers k
Rs = similar.(as) # allocate relevances Rᵏ for all layers k
Expand All @@ -69,7 +69,7 @@ end

get_activations(model, input) = (input, Flux.activations(model, input)...)

function mask_output_neuron!(R_out, a_out, ns::AbstractNeuronSelector)
function mask_output_neuron!(R_out, a_out, ns::AbstractOutputSelector)
fill!(R_out, 0)
idx = ns(a_out)
R_out[idx] .= 1
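Only the signature of `mask_output_neuron!` changes in this hunk; its contract stays the same. A standalone sketch of what the selector/mask pair does, in plain Julia with no package dependencies (`max_selector` is a hypothetical stand-in for a selector object):

```julia
# A selector maps output activations to the indices that are seeded with relevance 1.
max_selector(a_out) = [argmax(a_out)]  # hypothetical stand-in for an output selector

function mask_output!(R_out, a_out, select)
    fill!(R_out, 0)           # zero all relevances...
    R_out[select(a_out)] .= 1  # ...then seed relevance only at the selected outputs
    return R_out
end

a = [0.1, 0.7, 0.2]
mask_output!(similar(a), a, max_selector)  # → [0.0, 1.0, 0.0]
```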
9 changes: 0 additions & 9 deletions test/Project.toml
@@ -12,12 +12,3 @@ Suppressor = "fd094767-a336-5f1f-9728-57cf17d0bbfb"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
XAIBase = "9b48221d-a747-4c1b-9860-46a1d8ba24a7"

[compat]
Aqua = "0.8"
Flux = "0.13, 0.14"
JLD2 = "0.4"
Metalhead = "0.8 - 0.9.1"
NNlib = "0.8, 0.9"
ReferenceTests = "0.10"
Suppressor = "0.2"
XAIBase = "1.2"
