
Merge from upstream #90

Merged: 62 commits, Aug 2, 2018

Commits
56d1a82
Add shape inference when converting from onnx to caffe2 (#10037)
houseroad Jul 31, 2018
2422801
fix _pointwise_loss for target gradients (#10018)
Jul 31, 2018
ee17ed6
Add missing dependencies (#10086)
houseroad Jul 31, 2018
58fd6e1
Also add ATen/core tests to oss CI (#10029)
smessmer Jul 31, 2018
1f13453
Slightly relax the constraints on argument and return types to script…
apaszke Jul 31, 2018
d217856
Remove some unnecessary includes. (#10085)
ezyang Jul 31, 2018
e04f8bb
Add virtual dtor for ideep context (#10059)
Jul 31, 2018
ba5d33b
Re-Enable ATen in C2 in integration builds to test ONNX ATen conversions
bddppq Jul 31, 2018
34c7c56
Re-enable empty n-dimensional empty tensor and fix parallel CPU on em…
gchanan Jul 31, 2018
bf744be
Parse and register schema declarations lazily (#9801)
zdevito Aug 1, 2018
ceb0f14
Fix SpatialBN Fusion (#10044)
bwasti Aug 1, 2018
c54d71b
Upgrade old transform passes to newer APIs (#10046)
bwasti Aug 1, 2018
9c0f65f
Remove While op stuff (#10102)
bwasti Aug 1, 2018
799c947
add .gitattributes for EOL conversion. (#9813)
shkit Aug 1, 2018
f2412fb
Allow multiple ops.def and clean up code gen in general
bwasti Aug 1, 2018
aae3732
fixed a newly introduced regression in softmax (#10066)
Aug 1, 2018
294c065
Changed serialization mechanism of LambdaLR scheduler (#9927)
0phoff Aug 1, 2018
7d2bda7
Move DDP broadcast coalesced to C++ (#9729)
goldsborough Aug 1, 2018
fcd567e
Enable Optimization on mobile by default
bwasti Aug 1, 2018
ec807f2
Bail out if netdef has disable_nomnigraph argument
bwasti Aug 1, 2018
3d24704
Force sync device when ops are sampled for observation
Aug 1, 2018
5bd43a7
Refactor Seq2SeqModelCaffe2EnsembleDecoder (#10035)
pritamdamania Aug 1, 2018
6f6a1f2
fix test_load_error_msg failure (Network is unreachable) (#10021)
weiyangfb Aug 1, 2018
6fc75ea
Add CELU activation to pytorch (#8551)
zasdfgbnm Aug 1, 2018
43b1512
Move grid sampler to ATen (#9961)
ssnl Aug 1, 2018
b503109
Guard sizes/strides in THCUNN for scalars.
gchanan Aug 1, 2018
fa6b28b
Move ArrayRef, Backtrace, Error, SmallVector, optional to ATen/core; …
ezyang Aug 1, 2018
2f848ec
Use new PyTorch API to make code simpler
zuoxingdong Aug 1, 2018
ee964c5
NegativeBinomial distribution (#9345)
kashif Aug 1, 2018
a2a7b0c
Initial documentation for building libtorch (#10087)
anderspapitto Aug 1, 2018
f1964c4
Update eigen submodule to fix BUILD_ATEN issue (#10095)
mingzhe09088 Aug 1, 2018
87d57dc
Simplified Operator (#10080)
goldsborough Aug 1, 2018
4070005
Move C++17.h to ATen/core (#10107)
smessmer Aug 1, 2018
f126687
Add a dump() method to IR Node's. (#10106)
Aug 1, 2018
e8f2731
fix a couple problems with libtorch cmake file (#10091)
anderspapitto Aug 1, 2018
5a44be5
Minor nit in comment in CMakeLists.txt
ezyang Aug 1, 2018
f908b2b
Use google protobuf in pytorch onnx import/export
Aug 1, 2018
2d6738e
Fix lint in ATen/core (but not ArrayRef)
ezyang Aug 1, 2018
59af5b9
Move UniqueVoidPtr to ATen/core and apply lint
ezyang Aug 1, 2018
2d56b5c
Prepare THC for first class scalars (0-dimensional tensors).
gchanan Aug 1, 2018
fb24c52
Prepare TH for first class scalars (0-dimensional tensors).
gchanan Aug 1, 2018
1b1c47d
Update onnx to onnx/onnx@32ac71b (#10126)
onnxbot Aug 1, 2018
ad6d622
Add torch.compiled_with_cxx11_abi(). (#10071)
zou3519 Aug 1, 2018
e2846c3
Improve ArrayRef (#9610)
smessmer Aug 1, 2018
080ae5e
Remove implicit ArrayRef -> vector conversion (#9740)
smessmer Aug 1, 2018
edb9038
Lint ArrayRef.h (#10129)
smessmer Aug 1, 2018
1d427fd
Delete type_ field from TensorImpl, replaced with backend_/scalar_typ…
ezyang Aug 1, 2018
1f6888b
Allow mobile exporter to export string arrays (#10017)
pushkartripathi Aug 1, 2018
4ed5b92
#8518 Support for empty tuples (#10027)
jramseyer Aug 1, 2018
59c355c
Move halfbits2float and float2halfbits conversions to ATen. (#10134)
ezyang Aug 2, 2018
806854a
Pin AMD gpu id in Caffe2 CI (#10144)
bddppq Aug 2, 2018
24bb8ce
Move ATen/Half to ATen/core, and apply lint (#10137)
ezyang Aug 2, 2018
a44d9d6
Fix tensor check logic in logging (#10138)
Aug 2, 2018
191482f
Distinguish TupleLiteral from ListLiteral (#10128)
suo Aug 2, 2018
6b338c8
Implement torch.broadcast_tensors (#10075)
zou3519 Aug 2, 2018
8cc7d33
Renumber typeid.h so that the number lines up with ScalarType (#10139)
ezyang Aug 2, 2018
5699250
Move IdWrapper to ATen/core (#10152)
ezyang Aug 2, 2018
8a25acb
Use angle brackets instead of quotes for includes.
ezyang Aug 2, 2018
57061d6
Auto-batching IR transformation for control flow (#9392)
ChunliF Aug 2, 2018
acbc274
fix bug in 3d group convolution (#9860)
stephenyan1231 Aug 2, 2018
4a5cd4f
nomnigraph - new utility for graph transformation (#10081)
duc0 Aug 2, 2018
e220141
Merge remote-tracking branch 'upstream/master'
iotamudelta Aug 2, 2018
Files changed
1 change: 1 addition & 0 deletions .gitattributes
@@ -0,0 +1 @@
+*.bat text eol=crlf
2 changes: 1 addition & 1 deletion .jenkins/caffe2/build.sh
@@ -124,7 +124,7 @@ CMAKE_ARGS+=("-DUSE_OBSERVERS=ON")
CMAKE_ARGS+=("-DUSE_ZSTD=ON")
CMAKE_ARGS+=("-DCMAKE_INSTALL_PREFIX=${INSTALL_PREFIX}")

-if [[ $BUILD_ENVIRONMENT == *-aten-* ]]; then
+if [[ $BUILD_ENVIRONMENT == *-aten-* || -n "$INTEGRATED" ]]; then
if [[ CMAKE_ARGS != *USE_ATEN* ]] && [[ CMAKE_ARGS != *BUILD_ATEN* ]]; then
CMAKE_ARGS+=("-DBUILD_ATEN=ON")
fi
4 changes: 4 additions & 0 deletions .jenkins/caffe2/test.sh
@@ -115,6 +115,10 @@ if [[ $BUILD_ENVIRONMENT == *-rocm* ]]; then
# Our cuda top_k op has some asm code, the hipified version doesn't
# compile yet, so we don't have top_k operator for now
rocm_ignore_test+=("--ignore $CAFFE2_PYPATH/python/operator_test/top_k_test.py")

+# Our AMD CI boxes have 4 gpus on each
+# Remove this once we have added multi-gpu support
+export HIP_VISIBLE_DEVICES=$(($BUILD_NUMBER % 4))
fi

# Python tests
4 changes: 3 additions & 1 deletion CMakeLists.txt
@@ -214,9 +214,10 @@ if(NOT MSVC)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-strict-overflow")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-strict-aliasing")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=deprecated-declarations")
+set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-stringop-overflow")
# These flags are not available in GCC-4.8.5. Set only when using clang.
# Compared against https://gcc.gnu.org/onlinedocs/gcc-4.8.5/gcc/Option-Summary.html
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
if ("${CMAKE_CXX_COMPILER_ID}" MATCHES "Clang")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-invalid-partial-specialization")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-typedef-redefinition")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-unknown-warning-option")
@@ -226,6 +227,7 @@ if(NOT MSVC)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-c++14-extensions")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-constexpr-not-const")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-missing-braces")
+set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Qunused-arguments")
endif()
if ((APPLE AND (NOT ("${CLANG_VERSION_STRING}" VERSION_LESS "9.0")))
OR (CMAKE_COMPILER_IS_GNUCXX
1 change: 1 addition & 0 deletions aten/CMakeLists.txt
@@ -146,4 +146,5 @@ if (CAFFE2_CMAKE_BUILDING_WITH_MAIN_REPO)
set(ATen_THIRD_PARTY_INCLUDE ${ATen_THIRD_PARTY_INCLUDE} PARENT_SCOPE)
set(ATen_CPU_DEPENDENCY_LIBS ${ATen_CPU_DEPENDENCY_LIBS} PARENT_SCOPE)
set(ATen_CUDA_DEPENDENCY_LIBS ${ATen_CUDA_DEPENDENCY_LIBS} PARENT_SCOPE)
+set(ATen_CORE_TEST_SRCS ${ATen_CORE_TEST_SRCS} PARENT_SCOPE)
endif()
2 changes: 1 addition & 1 deletion aten/src/ATen/Allocator.h
@@ -6,7 +6,7 @@
#include <ATen/Error.h>
#include <ATen/Retainable.h>
#include <ATen/Device.h>
-#include <ATen/detail/UniqueVoidPtr.h>
+#include <ATen/core/UniqueVoidPtr.h>

namespace at {

1 change: 1 addition & 0 deletions aten/src/ATen/ArrayRef.cpp
@@ -0,0 +1 @@
+#include <ATen/ArrayRef.h>
192 changes: 1 addition & 191 deletions aten/src/ATen/ArrayRef.h
@@ -1,192 +1,2 @@
//===--- ArrayRef.h - Array Reference Wrapper -------------------*- C++ -*-===//
//
// The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//

// ATen: modified from llvm::ArrayRef.
// removed llvm-specific functionality
// removed some implicit const -> non-const conversions that rely on
// complicated std::enable_if meta-programming
// removed a bunch of slice variants for simplicity...

#pragma once

#include <ATen/Error.h>
#include <ATen/SmallVector.h>

#include <array>
#include <iterator>
#include <vector>

namespace at {
/// ArrayRef - Represent a constant reference to an array (0 or more elements
/// consecutively in memory), i.e. a start pointer and a length. It allows
/// various APIs to take consecutive elements easily and conveniently.
///
/// This class does not own the underlying data, it is expected to be used in
/// situations where the data resides in some other buffer, whose lifetime
/// extends past that of the ArrayRef. For this reason, it is not in general
/// safe to store an ArrayRef.
///
/// This is intended to be trivially copyable, so it should be passed by
/// value.
template<typename T>
class ArrayRef {
public:
typedef const T *iterator;
typedef const T *const_iterator;
typedef size_t size_type;

typedef std::reverse_iterator<iterator> reverse_iterator;

private:
/// The start of the array, in an external buffer.
const T *Data;

/// The number of elements.
size_type Length;

public:
/// @name Constructors
/// @{

/// Construct an empty ArrayRef.
/*implicit*/ ArrayRef() : Data(nullptr), Length(0) {}

/// Construct an ArrayRef from a single element.
/*implicit*/ ArrayRef(const T &OneElt)
: Data(&OneElt), Length(1) {}

/// Construct an ArrayRef from a pointer and length.
/*implicit*/ ArrayRef(const T *data, size_t length)
: Data(data), Length(length) {}

/// Construct an ArrayRef from a range.
ArrayRef(const T *begin, const T *end)
: Data(begin), Length(end - begin) {}

/// Construct an ArrayRef from a SmallVector. This is templated in order to
/// avoid instantiating SmallVectorTemplateCommon<T> whenever we
/// copy-construct an ArrayRef.
template<typename U>
/*implicit*/ ArrayRef(const SmallVectorTemplateCommon<T, U> &Vec)
: Data(Vec.data()), Length(Vec.size()) {
}

/// Construct an ArrayRef from a std::vector.
template<typename A>
/*implicit*/ ArrayRef(const std::vector<T, A> &Vec)
: Data(Vec.data()), Length(Vec.size()) {}

/// Construct an ArrayRef from a std::array
template <size_t N>
/*implicit*/ constexpr ArrayRef(const std::array<T, N> &Arr)
: Data(Arr.data()), Length(N) {}

/// Construct an ArrayRef from a C array.
template <size_t N>
/*implicit*/ constexpr ArrayRef(const T (&Arr)[N]) : Data(Arr), Length(N) {}

/// Construct an ArrayRef from a std::initializer_list.
/*implicit*/ ArrayRef(const std::initializer_list<T> &Vec)
: Data(Vec.begin() == Vec.end() ? (T*)nullptr : Vec.begin()),
Length(Vec.size()) {}

/// @}
/// @name Simple Operations
/// @{

const_iterator begin() const { return Data; }
const_iterator end() const { return Data + Length; }

reverse_iterator rbegin() const { return reverse_iterator(end()); }
reverse_iterator rend() const { return reverse_iterator(begin()); }

/// empty - Check if the array is empty.
bool empty() const { return Length == 0; }

const T *data() const { return Data; }

/// size - Get the array size.
size_t size() const { return Length; }

/// front - Get the first element.
const T &front() const {
AT_CHECK(!empty(), "ArrayRef: attempted to access front() of empty list");
return Data[0];
}

/// back - Get the last element.
const T &back() const {
AT_CHECK(!empty(), "ArrayRef: attempted to access back() of empty list");
return Data[Length-1];
}

/// equals - Check for element-wise equality.
bool equals(ArrayRef RHS) const {
if (Length != RHS.Length)
return false;
return std::equal(begin(), end(), RHS.begin());
}

/// slice(n, m) - Chop off the first N elements of the array, and keep M
/// elements in the array.
ArrayRef<T> slice(size_t N, size_t M) const {
AT_CHECK(N+M <= size(), "ArrayRef: invalid slice, ", N, " + ", M, " is not <= ", size());
return ArrayRef<T>(data()+N, M);
}

/// slice(n) - Chop off the first N elements of the array.
ArrayRef<T> slice(size_t N) const { return slice(N, size() - N); }

/// @}
/// @name Operator Overloads
/// @{
const T &operator[](size_t Index) const {
return Data[Index];
}

/// Vector compatibility
const T &at(size_t Index) const {
AT_CHECK(Index < Length, "ArrayRef: invalid index ", Index, " for length ", Length);
return Data[Index];
}

/// Disallow accidental assignment from a temporary.
///
/// The declaration here is extra complicated so that "arrayRef = {}"
/// continues to select the move assignment operator.
template <typename U>
typename std::enable_if<std::is_same<U, T>::value, ArrayRef<T>>::type &
operator=(U &&Temporary) = delete;

/// Disallow accidental assignment from a temporary.
///
/// The declaration here is extra complicated so that "arrayRef = {}"
/// continues to select the move assignment operator.
template <typename U>
typename std::enable_if<std::is_same<U, T>::value, ArrayRef<T>>::type &
operator=(std::initializer_list<U>) = delete;

/// @}
/// @name Expensive Operations
/// @{
std::vector<T> vec() const {
return std::vector<T>(Data, Data+Length);
}

/// @}
/// @name Conversion operators
/// @{
operator std::vector<T>() const {
return std::vector<T>(Data, Data+Length);
}

/// @}
};

} // end namespace at
+#include <ATen/core/ArrayRef.h>
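For orientation, a minimal usage sketch of the ArrayRef API documented in the deleted header above, against its new home in ATen/core (the `sum` helper is hypothetical, not part of this PR):

```cpp
#include <ATen/core/ArrayRef.h>

#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical helper: one signature accepts a std::vector, a C array,
// or a braced list without copying. ArrayRef does not own its data,
// so the backing buffer must outlive the view.
int64_t sum(at::ArrayRef<int64_t> xs) {
  int64_t total = 0;
  for (int64_t x : xs) {
    total += x;
  }
  return total;
}

int main() {
  std::vector<int64_t> v = {1, 2, 3, 4};
  int64_t arr[] = {5, 6, 7};
  std::cout << sum(v) << "\n";      // 10, implicit conversion from std::vector
  std::cout << sum(arr) << "\n";    // 18, implicit conversion from a C array
  std::cout << sum({8, 9}) << "\n"; // 17, from a std::initializer_list
  // slice(1, 2): drop the first element, keep the next two -> {2, 3}
  std::cout << sum(at::ArrayRef<int64_t>(v).slice(1, 2)) << "\n"; // 5
  return 0;
}
```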
28 changes: 1 addition & 27 deletions aten/src/ATen/Backtrace.h
@@ -1,28 +1,2 @@
#pragma once

#include <cstddef>
#include <string>
#include <typeinfo>

#include <ATen/ATenGeneral.h>

namespace at {
/// Utility to demangle a C++ symbol name.
AT_API std::string demangle(const char* name);

/// Returns the printable name of the type.
template <typename T>
inline const char* demangle_type() {
#ifdef __GXX_RTTI
static const std::string name = demangle(typeid(T).name());
return name.c_str();
#else // __GXX_RTTI
return "(RTTI disabled, cannot show name)";
#endif // __GXX_RTTI
}

AT_API std::string get_backtrace(
size_t frames_to_skip = 0,
size_t maximum_number_of_frames = 64,
bool skip_python_frames = true);
} // namespace at
+#include <ATen/core/Backtrace.h>
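Similarly, a small hypothetical sketch of the helpers this header now re-exports from ATen/core, using the signatures declared in the deleted body above:

```cpp
#include <ATen/core/Backtrace.h>

#include <iostream>
#include <typeinfo>
#include <vector>

int main() {
  // Demangle a raw symbol name, e.g. the one typeid() reports.
  std::cout << at::demangle(typeid(std::vector<int>).name()) << "\n";
  // Typed convenience wrapper; falls back to a placeholder string
  // when the build has RTTI disabled.
  std::cout << at::demangle_type<std::vector<int>>() << "\n";
  // Capture the current call stack with the declared defaults
  // (skip 0 frames, at most 64 frames, skip Python frames).
  std::cout << at::get_backtrace() << "\n";
  return 0;
}
```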
1 change: 1 addition & 0 deletions aten/src/ATen/CMakeLists.txt
@@ -445,6 +445,7 @@ if (NOT CAFFE2_CMAKE_BUILDING_WITH_MAIN_REPO)
endif()

# Pass source, includes, and libs to parent
+set(ATen_CORE_SRCS ${ATen_CORE_SRCS} PARENT_SCOPE)
set(ATen_CPU_SRCS ${ATen_CPU_SRCS} PARENT_SCOPE)
set(ATen_CUDA_SRCS ${ATen_CUDA_SRCS} PARENT_SCOPE)
set(ATen_CPU_TEST_SRCS ${ATen_CPU_TEST_SRCS} PARENT_SCOPE)
4 changes: 2 additions & 2 deletions aten/src/ATen/CPUApplyUtils.h
@@ -109,8 +109,8 @@ struct strided_tensor_iter {
: data_(tensor.data<T>()),
dim_(tensor.ndimension()),
counter_(dim_, 0),
-sizes_(tensor.sizes()),
-strides_(tensor.strides()) {
+sizes_(tensor.sizes().vec()),
+strides_(tensor.strides().vec()) {
_setup_arrays(tensor, this);
}
};
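The new `.vec()` calls follow from commit 080ae5e in this PR, which removes the implicit `ArrayRef` -> `std::vector` conversion; a rough sketch of the call-site pattern (hypothetical helper, not code from the diff):

```cpp
#include <ATen/ATen.h>

#include <cstdint>
#include <vector>

// sizes()/strides() return non-owning ArrayRef views, so code that
// stores them beyond the tensor's lifetime must copy explicitly.
std::vector<int64_t> copy_sizes(const at::Tensor& tensor) {
  // Before: std::vector<int64_t> s = tensor.sizes();  // implicit copy (removed)
  // After: the copy is spelled out at the call site.
  return tensor.sizes().vec();
}
```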
5 changes: 4 additions & 1 deletion aten/src/ATen/Context.cpp
@@ -37,8 +37,11 @@ Context::Context()
Type::registerCPU(this);
}

+// NB: Ensure that globalContext is initialized before we load
+// variable hooks, otherwise we will deadlock. Regardless, the
+// deadlock is bad, and being tracked at https://github.com/pytorch/pytorch/issues/9784
+static Context globalContext_;
Context & globalContext() {
-static Context globalContext_;
return globalContext_;
}
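The hunk above trades a function-local static for a namespace-scope one; a minimal sketch of why, with hypothetical names (the underlying deadlock is tracked in the issue cited in the new comment):

```cpp
// Hypothetical illustration, not PyTorch code.
struct Registry {};

// Lazy ("Meyers") singleton: constructed on first call under the
// compiler's one-time-initialization guard. If that construction ends
// up calling registry() again (e.g. a hook registering itself), the
// thread re-enters the guard and can deadlock:
//
//   Registry& registry() {
//     static Registry instance;  // lazy, guarded
//     return instance;
//   }

// The pattern adopted above: construct during static initialization
// instead, so the accessor is a plain reference return with no guard
// left to re-enter once later initializers start running.
static Registry global_registry_;
Registry& registry() {
  return global_registry_;
}
```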
