
Added new time profiler HandlersTimeProfiler which allows per handler time profiling #1398


Merged · 18 commits · Oct 31, 2020

Conversation

@harsh8398 (Contributor) commented Oct 23, 2020

Fixes #1346

Description:

  • Added HandlersTimeProfiler which allows per handler time profiling
  • Added test cases and updated docs

The output of print_results() is as follows:

-----------------------------------------  -----------------------  --------------  --------------  --------------  --------------  --------------
Handler                                    Event Name                     Total(s)      Min(s)/IDX      Max(s)/IDX         Mean(s)          Std(s)
-----------------------------------------  -----------------------  --------------  --------------  --------------  --------------  --------------
run.<locals>.log_training_results          EPOCH_COMPLETED                 9.69906       None/None       None/None            None            None
run.<locals>.log_validation_results        EPOCH_COMPLETED                 1.27098       None/None       None/None            None            None
run.<locals>.log_time                      EPOCH_COMPLETED                 0.00023       None/None       None/None            None            None
run.<locals>.log_intermediate_results      EPOCH_COMPLETED           not triggered       None/None       None/None            None            None
run.<locals>.log_training_loss             ITERATION_COMPLETED             0.02946       1e-05/874     0.00028/519           3e-05           7e-05
run.<locals>.log_time                      COMPLETED                 not triggered       None/None       None/None            None            None
-----------------------------------------  -----------------------  --------------  --------------  --------------  --------------  --------------
Total                                                                     10.99972
-----------------------------------------  -----------------------  --------------  --------------  --------------  --------------  --------------
Processing took total 5.67398s [min/index: 0.00443s/937, max/index: 0.00784s/0, mean: 0.00605s, std: 0.00035s]
Dataflow took total 8.15311s [min/index: 0.00538s/936, max/index: 0.01123s/937, mean: 0.00869s, std: 0.00112s]
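The table above comes from timing each attached handler individually. The core idea, wrapping a handler so each invocation's duration is recorded per (handler, event) pair, can be sketched with plain Python. This is an illustrative sketch, not the PR's implementation; `profiled` and `timings` are hypothetical names:

```python
import time
from collections import defaultdict

# Hypothetical sketch of per-handler profiling: wrap each handler so every
# invocation's duration is recorded under a (handler name, event name) key.
timings = defaultdict(list)

def profiled(handler, event_name):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = handler(*args, **kwargs)
        timings[(handler.__name__, event_name)].append(time.perf_counter() - start)
        return result
    return wrapper

def log_training_results():
    time.sleep(0.01)  # stand-in for real handler work

handler = profiled(log_training_results, "EPOCH_COMPLETED")
for _ in range(3):
    handler()

recorded = timings[("log_training_results", "EPOCH_COMPLETED")]
print(len(recorded), round(sum(recorded), 3))
```

From the recorded lists, the total/min/max/mean/std columns of the table follow directly.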

Check list:

  • New tests are added (if a new feature is added)
  • New doc strings: description and/or example code are in RST format
  • Documentation is updated (if required)

I have trimmed the table output in the docstrings and docs so that it does not break lines and looks tidy.

@harsh8398 harsh8398 marked this pull request as draft October 23, 2020 18:44
@vfdev-5 (Collaborator) left a comment

Thanks for the PR @harsh8398 !
This is not as simple as I thought, but I'm sure we can make it work after several iterations.

@vfdev-5 vfdev-5 added the hacktoberfest-accepted For accepted PRs label Oct 23, 2020
@harsh8398 harsh8398 requested a review from vfdev-5 October 25, 2020 13:38
@vfdev-5 (Collaborator) commented Oct 25, 2020

@harsh8398 thanks for the update and for pinging me. I'll try to check out your code and play around with it to understand better how it works and whether everything works as intended.

Meanwhile, it would be very helpful if you could also test the profiler on a concrete example, e.g. examples/contrib/cifar10: add this handler to the existing code and run it (you can reduce the number of epochs to 7 or 10) to see if everything works as intended. 👍

@harsh8398 (Contributor, Author)

@vfdev-5 Thanks. Okay, I'll test it on the cifar10 example; I did try MNIST before. Also, FYI, there are some things left to do:

  • threshold for event filters
  • export results to csv

Some tests are already added, but I think I can write better ones. I'll update/add the tests at the end, once the profiler code is finalized.
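For the CSV export item, the rows of the summary table could be serialized with the stdlib csv module along these lines. This is a hypothetical sketch; the column names and the export method eventually added to the profiler may differ:

```python
import csv
import io

# Hypothetical rows shaped like the profiler's summary table.
rows = [
    ("run.<locals>.log_training_results", "EPOCH_COMPLETED", 9.69906),
    ("run.<locals>.log_training_loss", "ITERATION_COMPLETED", 0.02946),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["handler", "event_name", "total_s"])
writer.writerows(rows)
print(buf.getvalue())
```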

@vfdev-5 (Collaborator) commented Oct 25, 2020

Okay I'll test it on cifar10 example. I did try MNIST before

Oh, that's great! Maybe the cifar10 check is optional then. However, what I was thinking, and which can be done in a follow-up PR, is providing a notebook with a basic training loop showing usage of this class, so that users would have a working example.

@harsh8398 (Contributor, Author) commented Oct 25, 2020

Oh, that's great! Maybe the cifar10 check is optional then. However, what I was thinking, and which can be done in a follow-up PR, is providing a notebook with a basic training loop showing usage of this class, so that users would have a working example.

Should I open a follow up issue on that? I can work on that after this.

@vfdev-5 (Collaborator) commented Oct 25, 2020

Should I open a follow up issue on that? I can work on that after this.

Yes, please.

@vfdev-5 (Collaborator) commented Oct 26, 2020

@harsh8398 I checked the code and I have a few questions and comments:

  • The code already works with custom events; that's pretty cool. My code:
import time

from ignite.contrib.handlers import HandlersTimeProfiler
from ignite.engine import Engine, Events, EventEnum


class BackpropEvents(EventEnum):
    OPTIM_STEP_COMPLETED = 'optim_step_completed'



def train_step(e, b):
    time.sleep(0.1)
    engine.fire_event(BackpropEvents.OPTIM_STEP_COMPLETED)


engine = Engine(train_step)

engine.register_events("a", "b")
engine.register_events(*BackpropEvents)


@engine.on(BackpropEvents.OPTIM_STEP_COMPLETED)
def function_before_backprop(engine):
    time.sleep(0.0333)


@engine.on(Events.EPOCH_STARTED)
def ep_start():
    time.sleep(0.15)

    
@engine.on(Events.EPOCH_COMPLETED)
def ep_end():
    time.sleep(0.2)

a_ev_args1 = [12, 34]

@engine.on(Events.EPOCH_COMPLETED(every=5), a_ev_args1)
def trigger_a_event(args1):
    engine.fire_event("a")

a_ev_args2 = [122, 324]
    
@engine.on("a", a_ev_args2)
def on_a_event(args2):
    print("triggered 'a' event", args2)
    time.sleep(1.0)


profiler = HandlersTimeProfiler()
profiler.attach(engine)


@engine.on(Events.EPOCH_COMPLETED(every=3) | Events.EPOCH_COMPLETED(once=1))
def print_profiling():
    profiler.print_results(profiler.get_results())

def get_data(n_iters):
    for i in range(n_iters):
        time.sleep(0.05)
        yield i

        
data = get_data(5 * 10)

engine.run(data, max_epochs=5, epoch_length=5)

print("Continue")

engine.run(data, max_epochs=10, epoch_length=5)
  • I understand that the condition below helps to avoid std being NaN:
            min_index = ("None", "None")
            max_index = ("None", "None")
            mean = "None"
            std = "None"
            if len(data) > 1:

On the other hand, when we see None for triggered events, it is not obvious why:

ep_start                      EPOCH_STARTED                    0.15022       None/None       None/None            None            None  
ep_end                        EPOCH_COMPLETED                  0.20028       None/None       None/None            None            None  
trigger_a_event               EPOCH_COMPLETED                    3e-05       None/None       None/None            None            None  

How about doing it like this:

            min_index = ("None", "None")
            max_index = ("None", "None")
            mean = "None"
            std = "None"
            if len(data) > 0:
                min_index = (round(torch.min(data).item(), 5), torch.argmin(data).item())
                max_index = (round(torch.max(data).item(), 5), torch.argmax(data).item())
                mean = round(torch.mean(data).item(), 5)
                if len(data) > 1:
                    std = round(torch.std(data).item(), 5)

I'll also leave comments in the code.

@harsh8398 harsh8398 requested a review from vfdev-5 October 30, 2020 19:07
@harsh8398 (Contributor, Author)

@vfdev-5 I'll start with mypy, test cases and maybe some doc changes. Please let me know if there are any more improvements needed in profiler code.

@vfdev-5 vfdev-5 marked this pull request as ready for review October 31, 2020 00:14
@vfdev-5 (Collaborator) commented Oct 31, 2020

@harsh8398 Looks awesome now!
A little improvement for the filtering timers and it's good to go.

For now we'll keep this behaviour with thresholding of filtered handlers. In the future, I think it would be better to change the way we wrap handlers for filtered execution, so that we could easily replace/wrap the original handler inside the filtering logic; that way we could add the timing wrapper to the original handler instead of the filtered one. Anyway, this is something to think about.

@vfdev-5 (Collaborator) left a comment

Thanks @harsh8398 !

@vfdev-5 (Collaborator) commented Oct 31, 2020

@harsh8398 could you please take a look at how to fix the rendering, in another PR?

(screenshot of the docs rendering)

@harsh8398 (Contributor, Author)

@vfdev-5 I'll have to trim it down to fix it. From my monitor: (screenshot)

Does sphinx allow scrollbar?

@vfdev-5 (Collaborator) commented Oct 31, 2020

@harsh8398 looks good with the trimming.

Does sphinx allow scrollbar?

For a formula or code block I'm not sure...

@harsh8398 (Contributor, Author)

@harsh8398 looks good with the trimming.

It's already trimmed, actually. I think you are viewing the docs at a lower resolution than my monitor.

@vfdev-5 (Collaborator) commented Oct 31, 2020

It's already trimmed, actually. I think you are viewing the docs at a lower resolution than my monitor.

That's possible. However, when I zoom in/out in the browser, the layout remains the same. What is your screen resolution, actually?

Btw, Python 3.5 CI is failing: https://github.com/pytorch/ignite/pull/1398/checks?check_run_id=1336007242#step:6:1091

@trsvchn could you please check how it is rendered in your case, and see if we could tweak the CSS a bit so that the text appears a bit smaller, if needed?

@trsvchn (Collaborator) commented Oct 31, 2020

@harsh8398 looks good with the trimming.

Does sphinx allow scrollbar?

For a formula or code block I'm not sure...

The same, I'm not sure...

@trsvchn (Collaborator) commented Oct 31, 2020

@vfdev-5 on my display, 1366x768 pixels (361x203 mm), in Firefox it looks as follows.

With 100% zoom (no zoom): (screenshot)

But with 120% zoom it's okay: (screenshot)

I'll check what we can do

@vfdev-5 (Collaborator) commented Oct 31, 2020

@trsvchn that's true, nice catch with the 120% zoom! I tried that and also see the correct rendering. Maybe we can make the central window a bit larger and thus reduce the size of the left sidebar.

@trsvchn (Collaborator) commented Oct 31, 2020

@vfdev-5 No worries, we can fix that with CSS tweaks, by changing the font size of the code block for that class.

(screenshot)

You can see that only the example code block has changed.

@trsvchn (Collaborator) commented Oct 31, 2020

Here is a test on the master branch: (screenshots)

@vfdev-5 (Collaborator) commented Oct 31, 2020

Looks neat! Thanks @trsvchn

@vfdev-5 vfdev-5 merged commit 9af2121 into pytorch:master Oct 31, 2020
vfdev-5 added a commit to vfdev-5/ignite that referenced this pull request Nov 24, 2020
* Update TQDM to > 4.48.0 (pytorch#1339)

* Fix tqdm 4.49.0 tests.

* update requirements.

* Use distutils instead of packaging for getting package version.

* Reformat code.

* Reformat using black 19.10b0.

Co-authored-by: vfdev <[email protected]>

* Activate mypy in ignite.utils (pytorch#1352)

* Activate mypy in ignite.utils

* Add missing torch package to lint stage

* Move mypy check to test stage

Co-authored-by: vfdev <[email protected]>

* Exposed EventEnum in docs and added import statement in its examples (pytorch#1345) (pytorch#1353)

* Update Shpinx to v3.1. (pytorch#1356)

Fixes pytorch#1272

* Update README.md

* Update doc of examples for Trains fileserver setup (pytorch#1360)

* update doc for trains fileserver setup

* replace github issue by documentation

Co-authored-by: Desroziers <[email protected]>

* Updated commit fix (pytorch#1361) (pytorch#1364)

* Updated commit fix

* Update code-style.yml

Co-authored-by: rex_divakar <[email protected]>

* added tuple type to mixins.py (pytorch#1365)

* added tuple type to mixins.py

* allow mypy to pass through base file

* fixed linting issues

Co-authored-by: vfdev <[email protected]>

* Activate mypy in ignite.distributed (pytorch#1355)

* Activate mypy in ignite.distributed

* Fix tests & py3.5 inline type hints

* Remove typing,overload

* Fix multiple typing issues

* Fix typing issues

* Fix TPU test

Co-authored-by: vfdev <[email protected]>

* Improve typing for ignite.handlers module (1343) (pytorch#1349)

* Improve typing for ignite.handlers module (1343)

* autopep8 fix

* Fix typing for py35, remove handlers block from mypy.ini

* Add exception to ModelCheckpoint when saving last checkpoint

* Add test for ModelCheckpoint with redefined save_handler case

* autopep8 fix

Co-authored-by: AutoPEP8 <>
Co-authored-by: Sylvain Desroziers <[email protected]>
Co-authored-by: vfdev <[email protected]>
Co-authored-by: trsvchn <[email protected]>

* [3] [contrib/metrics] setup typing in contrib part of the library (pytorch#1363)

* [3] [contrib/metrics] setup typing in contrib part of the library

* review changes

* Update gpu_info.py

Co-authored-by: Sylvain Desroziers <[email protected]>
Co-authored-by: vfdev <[email protected]>

* [2] [contrib/handlers] setup typing in contrib part of the library (pytorch#1362)

* [2] [contrib/handlers] setup typing in contrib part of the library

* Fix a typo in tqdm logger

* review comments

* Update mlflow_logger.py

* Update neptune_logger.py

* review changes

* review changes

Co-authored-by: Sylvain Desroziers <[email protected]>
Co-authored-by: vfdev <[email protected]>

* [1] [contrib/engines] setup typing in contrib part of the library  (pytorch#1351)

* setup typing for contribute/engine part of the code

* revert doc string changes

* Update common.py

Co-authored-by: Sylvain Desroziers <[email protected]>
Co-authored-by: vfdev <[email protected]>

* Update PULL_REQUEST_TEMPLATE.md

* Disable cross-ref links for type annotations (pytorch#1374)

* Added reinit__is_reduced and sync_all_reduce docs in metrics doc (pytorch#1373)

* added links to reinit__is_reduced and sync_all_reduce decorators in metrics documentation

* updated order in list of metrics

* deleted decorators from metric list

* Update metrics.rst

Co-authored-by: vfdev <[email protected]>

* warning if current device index is lower than current local rank (pytorch#1335) (pytorch#1376)

* warning if current device index is lower than current local rank (pytorch#1335)

* warning if current device index is lower than current local rank

* Updated code and tests

* Fixed formatting

* Updated code and tests for horovod
- fixed failing test

* Updated tests

Co-authored-by: vfdev-5 <[email protected]>

* Removed debug prints

* Fixed failing hvd tests

Co-authored-by: Sai Sandeep Mutyala <[email protected]>

* Github Actions workflow CI for horovod on CPU (pytorch#1377)

* Initial commit

* Update hvd-tests.yml

* Update hvd-tests.yml

* Update hvd-tests.yml

* trigger GitHub actions

* removed examples

* trigger GitHub actions

* Improve typing of distributed.comp_modules.utils.all_gather (pytorch#1370)

* Improve typing of distributed.comp_modules.utils.all_gather

* Fix all_gather gloo test

* Fix XLA test

Co-authored-by: vfdev <[email protected]>

* Removed state.restart method (pytorch#1385)

* Activate mypy in ignite.engine (pytorch#1379)

* Activate mypy in ignite.engine

* Fix missing import

* Fix typing issues with nighty build

* Fix PR findings

Co-authored-by: Sylvain Desroziers <[email protected]>
Co-authored-by: vfdev <[email protected]>

* add acknowledgment for IFPEN (pytorch#1387)

Co-authored-by: Desroziers <[email protected]>

* Fix collections DeprecationWarning (pytorch#1388)

* Remove deprecated CustomPeriodicEvent from nb example, fix tb OutputHadler (pytorch#1389)

* Update checkpoint.py (pytorch#1394)

* Add GH Action to build and publish Docker images (pytorch#1390)

* ADD GH Action to build and publish Docker images

* Configure GH action for Docker build

* Fix image_tag fetching

* Fix identation for main steps

* Add token envs for GH docker action, fix push all tag_image

* Switch to horovod 0.20.3

* Push images on push events

* Fix if conditional

* Toctrees for classes and methods using sphinx autosummary (pytorch#1393)

* Implement autosummary patch for autolisting

* Fix css for autogenerated tables via autosummary

* Improve autolist feature

* Add toctrees for methods and classes for ignite

* Better import for autolist

* Add toctrees for methods and classes for contrib package

* Fix CSS for autosummary table row height

* Fix warnings raised by toctree

* Remove all deprecated args, kwargs for v0.5.0 (pytorch#1396) (pytorch#1397)

* Remove all deprecated args, kwargs for v0.5.0 (pytorch#1396)

* Improve deprecation message of setup_any_logging (pytorch#1396)

* Update setup.cfg

* Remove module members w/o docstrings from autolist toc tables (pytorch#1401)

* Add --color=yes in pytest config (pytorch#1402)

* removes styling of function descriptions as requested in pytorch#1256 (pytorch#1399)

* removes styling of function descriptions as requested in pytorch#1256

* reverts modifications to the example files

* Skip circle ci and GA builds (pytorch#1400)

* [WIP] Skip circle ci and GA builds

* Fixes swissknife version

* Replaced swissknife by sh script

* [skip ci] Updated trigger_if_modified.sh script
- excluded docs from github actions
- added 1.6.0 to pytorch-version-tests

* Fixes pytorch#1408, XLA failing tests (pytorch#1412)

- Issue is related to xla nightly
- Probably related to pytorch/xla#2576

* Added Mypy check as github action (pytorch#1418)

* Added Mypy check to as github action

* Removed py3.5

* Activate mypy in ignite.metrics (pytorch#1391)

* Activate mypy in ignite.metrics

* remove num_examples check

* fix CR issues

* remove init defaults in Accuracy

* Remove None assignments in __init__

* improved typing connected with mypy issues

Co-authored-by: vfdev <[email protected]>

* Update README.md

Badge travis org -> com

* add tolerance for tpu in r2 and canberra tests (pytorch#1414)

* add tolerance for tpu

* autopep8 fix

* Reverted test_canberra_metric.py as unnecessary

* Update test_r2_score.py

* Update test_r2_score.py

Co-authored-by: Desroziers <[email protected]>
Co-authored-by: sdesrozis <[email protected]>
Co-authored-by: vfdev <[email protected]>

* Activate mypy ignite contrib metrics (pytorch#1419)

* Activate mypy in ignite.contrib.metrics

* Add missing type hints in ignite.contrib.metrics

* Add missing type hints

* Contributing guide (pytorch#1424)

* Add materials on how to setup dev env in CONTRIBUTING guide pytorch#1395
- first draft of first-time contributor guide

* Add materials on how to setup dev env in CONTRIBUTING guide pytorch#1395
- add in Table of Contents

* Add materials on how to setup dev env in CONTRIBUTING guide pytorch#1395
- fix table of contents link

* Add materials on how to setup dev env in CONTRIBUTING guide pytorch#1395
- rollback README.md, remove IDE setting

* Update CONTRIBUTING.md

Co-authored-by: Sylvain Desroziers <[email protected]>
Co-authored-by: vfdev <[email protected]>

* replace Number type with float; remove unneeded type ignores (pytorch#1425)

Co-authored-by: vfdev <[email protected]>

* Update README.md

* Added new time profiler `HandlersTimeProfiler` which allows per handler time profiling (pytorch#1398)

* added new HandlersTimeProfiler with handler level details and added tests for HandlersTimeProfiler (pytorch#1346)

* updated docs and docstring for HandlersTimeProfiler (pytorch#1346)

* updated HandlersTimeProfiler to support any events and updated detach mechanism of profiler (pytorch#1346)

* updated HandlersTimeProfiler with code improvements and implemented csv export method (pytorch#1346)

* updated HandlersTimeProfiler to handle event handler bundle better (pytorch#1346)

* HandlersTimeProfiler: added threshold for filtering profiled time for handlers attached to event with filters (pytorch#1346)

* HandlersTimeProfiler: add tests and type hints (pytorch#1398)

* HandlersTimeProfiler: use FloatTensor for list to tensor conversion (pytorch#1398)

* HandlersTimeProfiler: use torch.tensor for list to tensor conversion (pytorch#1398)

* HandlersTimeProfiler: remove unnecessary import (pytorch#1398)

* HandlersTimeProfiler: move tensor conversion in compute_basic_stats (pytorch#1398)

Co-authored-by: vfdev <[email protected]>

* Fix HandlersTimeProfiler example rendering (pytorch#1428)

* Fix HandlersTimeProfiler example rendering

* Fix WARNINGs: Title underline too short

* Add horizontal scrollbars for examples instead of font-size tweaks

* Enable horizontal scrollbars for examples globally

* Save model with same filename (pytorch#1423)

* save model with same filename

* Update checkpoint.py

* use elif

* refactor to have only one comprehension list

* refactoring

* improve test

* autopep8 fix

Co-authored-by: Desroziers <[email protected]>
Co-authored-by: vfdev <[email protected]>
Co-authored-by: sdesrozis <[email protected]>

* Some docs nit picking (pytorch#1435)

* enable extra flags for stricter type checking (pytorch#1433)

* Fixes pytorch#1426 - distrib cpu tests on win (pytorch#1429)

* Fixes pytorch#1426 - distrib cpu tests on win

* Skip distributed/tpu/multinode_distributed if SKIP_DISTRIB_TESTS=1

* replaced sh by bash

* Update pytorch-version-tests.yml

* Updated files to ignore for GitHub Actions CI (pytorch#1441)

* Added windows gpu test to circle ci (pytorch#1440)

* [WIP] Added windows gpu test to circle ci

* Updated windows config

* Updated conda installation

* Updated miniconda install command

* Removed conda installation

* Updated configuration

* Adding max_iters as an optional arg in Engine run (pytorch#1381)

* initial draft, adding max_iters as optional args in run

* fixed typo

* minor bug fixes

* resolving failing tests

* fixed out-of-place conditional

* typo fix

* updated docstring for 'run'

* added initial tests

* (WIP) restructured creating a new state with max_iters

* updated tests & docstrings

* initial draft, adding max_iters as optional args in run

* fixed typo

* minor bug fixes

* resolving failing tests

* fixed out-of-place conditional

* typo fix

* updated docstring for 'run'

* added initial tests

* (WIP) restructured creating a new state with max_iters

* updated tests & docstrings

* added test to check _is_done

* updating engine loop condition

* autopep8 fix

* linting issues

* fixed mypy errors

* fixed formatting

* minor fix & add test for larger max_iters

* removed unused typechecking

Co-authored-by: thescripted <[email protected]>
Co-authored-by: vfdev <[email protected]>

* Updated circleci trigger_if_modified.sh if on master

* Fix failing distributed ci (pytorch#1445)

* Setup random free port for distrib ci

* autopep8 fix

Co-authored-by: vfdev-5 <[email protected]>

* Added torch.cuda.manual_seed_all(seed) (pytorch#1444)

* fix ReduceOp typing issue (pytorch#1447)

* Update README.md

* Updated miniconda setup (pytorch#1449)

* Fixed broken coverage (pytorch#1451)

* Fixed broken coverage

* Updated hvd-tests.yml

* Migrated nightly build/release to GitHub Actions (pytorch#1448) (pytorch#1450)

* Added nightly build/release action

* Updated yml

* Updated to conda-incubator/setup-miniconda@v2

* Reverted modifications in other actions

* Migrated nightly build/release to GitHub Actions (pytorch#1448)

* Added nightly build/release action

* Updated yml

* Updated to conda-incubator/setup-miniconda@v2

* Reverted modifications in other actions

* Fix PyPi upload

* Finalized binaries-nightly-release.yml

* Updated README

* [contributing] add syncing up with the upstream (pytorch#1452)

* Update setup.cfg

* [contributing] add syncing up with the upstream

* Apply suggestions from code review

Co-authored-by: vfdev <[email protected]>

* Update CONTRIBUTING.md

Co-authored-by: vfdev <[email protected]>

* [ci] create docs.yml

* install torch

* Activate MyPy in ignite.contrib.engines (pytorch#1416)

* Activate mypy in ignite.contrib.engines

* Fix review comments

* fix extra event too

* Update to fix strict errors

* Update quickstart.rst (pytorch#1460)

* Update quickstart.rst

Plz have a look if I am going correct or not. ###rewording sentences to simplify the understanding

* Update docs/source/quickstart.rst

Co-authored-by: vfdev <[email protected]>

* Update quickstart.rst

* Update quickstart.rst

* Update docs/source/quickstart.rst

Co-authored-by: vfdev <[email protected]>

* Update quickstart.rst

Final commit is done. You can review it.

Co-authored-by: vfdev <[email protected]>

* [docs] intersphinx update in conf.py

* [docs] add missing function in handlers docs (pytorch#1463)

* Update setup.cfg

* [docs] missing function in handlers docs

* [docs] add ignite favicon (pytorch#1462)

* Update setup.cfg

* [docs] add ignite favicon

* Add missing classes and links for docs (pytorch#1461)

* Update CONTRIBUTING.md

* Update concepts.rst (pytorch#1465)

* setup toml, yaml, prettier in pre-commit (pytorch#1468)

* Update setup.cfg

* [pre-commit] setup yaml in pre-commit hook

* [pre-commit] setup toml prettier

* [docs] make GIF look good on mobile (pytorch#1470)

* Update setup.cfg

* [docs] make gif fit on mobile

* Update index.rst

* use .. raw:: html

* Update index.rst

Co-authored-by: vfdev <[email protected]>

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* [docs] add submodule in engine.rst (pytorch#1464)

* Update setup.cfg

* [docs] add submodule in engine

* [docs] add suggestions, contrib engine docs and 45% width

* Update ignite_theme.css

* [metrics] speed up SSIM tests (pytorch#1467)

* Update setup.cfg

* [metrics] update ssim

* use np.allclose instead of torch.allclose

* Apply suggestions from code review

* extract into _test_ssim

* extract into scripts

* fix path

* fix path

* fix path

* good to go!

* [ci] universal conda build (pytorch#1471)

* Update setup.cfg

* rm conda_build_config

* rm conda_build_config

* Update docs.yml

* Update install_docs_deps.sh

Co-authored-by: vcarpani <[email protected]>
Co-authored-by: Anton Grübel <[email protected]>
Co-authored-by: Harsh Patel <[email protected]>
Co-authored-by: Théo Dumont <[email protected]>
Co-authored-by: Sylvain Desroziers <[email protected]>
Co-authored-by: Desroziers <[email protected]>
Co-authored-by: rex_divakar <[email protected]>
Co-authored-by: Benjamin Kinga <[email protected]>
Co-authored-by: Taras Savchyn <[email protected]>
Co-authored-by: trsvchn <[email protected]>
Co-authored-by: RaviTeja Pothana <[email protected]>
Co-authored-by: Josseline Perdomo <[email protected]>
Co-authored-by: Sai Sandeep Mutyala <[email protected]>
Co-authored-by: Ramesht Shukla <[email protected]>
Co-authored-by: botbotbot <[email protected]>
Co-authored-by: Jeff Yang <[email protected]>
Co-authored-by: Sergey Epifanov <[email protected]>
Co-authored-by: sdesrozis <[email protected]>
Co-authored-by: zhxxn <[email protected]>
Co-authored-by: François COKELAER <[email protected]>
Co-authored-by: thescripted <[email protected]>
Co-authored-by: vfdev-5 <[email protected]>
Co-authored-by: Rostyslav Zatserkovnyi <[email protected]>
Co-authored-by: Afzal <[email protected]>
Labels: hacktoberfest-accepted (For accepted PRs)

Successfully merging this pull request may close these issues: Improve or New time profiling utility

3 participants