Fast Function Approximations lowering. #8566

Open
wants to merge 57 commits into base: main

Commits (57)
94f57e2
Fast vectorizable atan and atan2 functions.
mcourteaux Aug 10, 2024
de2c334
Default to not using fast atan versions if on CUDA.
mcourteaux Aug 10, 2024
54aad39
Finished fast atan/atan2 functions and tests.
mcourteaux Aug 10, 2024
61f17bb
Correct attribution.
mcourteaux Aug 10, 2024
b5aa8b9
Clang-format
mcourteaux Aug 10, 2024
59bda4a
Weird WebAssembly limits...
mcourteaux Aug 11, 2024
81a4a47
Small improvements to the optimization script.
mcourteaux Aug 11, 2024
fc872f8
Polynomial optimization for log, exp, sin, cos with correct ranges.
mcourteaux Aug 11, 2024
4cdfb9e
Improve fast atan performance tests for GPU.
mcourteaux Aug 12, 2024
c35e64f
Bugfix fast_atan approximation. Fix correctness test to exceed the ra…
mcourteaux Aug 12, 2024
2a5e88a
Cleanup
mcourteaux Aug 12, 2024
e17b0af
Enum class instead of enum for ApproximationPrecision.
mcourteaux Aug 12, 2024
28de29b
Weird Metal limits. There should be a better way...
mcourteaux Aug 12, 2024
cc434f6
Skip test for WebGPU.
mcourteaux Aug 12, 2024
77d162b
Fast atan/atan2 polynomials reoptimized. New optimization strategy: ULP.
mcourteaux Aug 13, 2024
bb9ddca
Feedback Steven.
mcourteaux Aug 13, 2024
342babe
More comments and test mantissa error.
mcourteaux Aug 14, 2024
8a100fc
Do not error when testing arctan performance on Metal / WebGPU.
mcourteaux Aug 14, 2024
01631fb
Rework precision specification. Generalize towards using this for oth…
mcourteaux Nov 11, 2024
bf07407
Clang-format.
mcourteaux Nov 11, 2024
3cb0ecb
Fix makefile and clang-tidy.
mcourteaux Nov 11, 2024
59bc83a
Fix incorrect approximation selection when required precision is not …
mcourteaux Nov 12, 2024
9ded043
Feedback from Steven.
mcourteaux Dec 3, 2024
ed6b71f
Implemented approximation tables for sin, cos, exp, log fast variants…
mcourteaux Feb 4, 2025
1416550
Clang-format.
mcourteaux Feb 4, 2025
a1dedbc
Move Polynomial Optimizer Python script to tools/ directory.
mcourteaux Feb 4, 2025
569bf69
Enable performance test for fast_atan and fast_atan2.
mcourteaux Feb 4, 2025
37f48fa
LLVM upper-limit 99 (CMake needs an upper limit).
mcourteaux Feb 4, 2025
33d518b
Add LLVM IR for PTX sin.approx, cos.approx, tanh.approx
mcourteaux Feb 4, 2025
0ef2c9c
Implemented tan. Improved polynomial optimizer performance for MULPE …
mcourteaux Feb 5, 2025
08f8bbd
Implemented tanh, tan. Many improvements to accuracy test and perform…
mcourteaux Feb 5, 2025
d6b3947
Clang-format.
mcourteaux Feb 5, 2025
dbe316e
WIP: Fiddle with strict_float behavior in CSE. Fix fast math precisio…
mcourteaux Feb 7, 2025
b1b23b5
Nuke MAE_MULPE. Separate optimized MULPE-corrected sin and cos.
mcourteaux Feb 8, 2025
3c30732
Clang-format
mcourteaux Feb 8, 2025
4e0d2c2
Some cleanup.
mcourteaux Feb 8, 2025
5d1dcc0
Fix sine.
mcourteaux Feb 8, 2025
3232680
Fix clang-tidy. Mark OpenCL exp() as fast.
mcourteaux Feb 8, 2025
abe25ab
Clang format is annoying me.
mcourteaux Feb 8, 2025
73e6e7b
Remove my experimental CSE step.
mcourteaux Feb 9, 2025
44b80f1
OpenCL performance of fast_exp forced poly is expected to be worse.
mcourteaux Feb 9, 2025
c29a24d
OpenCL fast functions selected for fast transcendentals.
mcourteaux Feb 9, 2025
7dd1f40
Lower fast intrinsics on metal to the fast:: namespace versions.
mcourteaux Feb 9, 2025
69b9990
Split tables for sin and cos, as metal has odd precision for sin. Add…
mcourteaux Feb 9, 2025
41d072c
Move range_reduce_log to a header. Drive-by fix listing libOpenCL.so.…
mcourteaux Feb 10, 2025
f0357dc
Fix API documentation. Improve measuring accuracy. Fix vector_math te…
mcourteaux Feb 10, 2025
7004161
Also vectorize on GPU to make sure we test that.
mcourteaux Feb 11, 2025
0de4dbc
Remove libOpenCL.so from search list in favor of libOpenCL.so.1
mcourteaux Feb 11, 2025
a637f8e
Add FastMathFunctions.cpp to Makefile
mcourteaux Feb 11, 2025
267ae49
Add support for derivatives for the fast_ intrinsics.
mcourteaux Feb 11, 2025
53a2263
Remove unused helper function.
mcourteaux Feb 11, 2025
70c6d8d
Add in a gracefactor for precision when the system does not support FMA.
mcourteaux Feb 11, 2025
5adec40
Clang Format.
mcourteaux Feb 11, 2025
02e78f1
Windows doesn't print thousand separators with printf. :(
mcourteaux Feb 11, 2025
c82a188
Remove grace factor, and use safety factor of 5% when selecting a pol…
mcourteaux Feb 16, 2025
cd365db
Use 50% tighter constraints when no FMA is available to compensate fo…
mcourteaux Feb 17, 2025
3211d3a
Clang-format.
mcourteaux Feb 17, 2025
2 changes: 2 additions & 0 deletions Makefile
@@ -421,6 +421,7 @@ SOURCE_FILES = \
AlignLoads.cpp \
AllocationBoundsInference.cpp \
ApplySplit.cpp \
ApproximationTables.cpp \
Argument.cpp \
AssociativeOpsTable.cpp \
Associativity.cpp \
@@ -479,6 +480,7 @@ SOURCE_FILES = \
Expr.cpp \
ExtractTileOperations.cpp \
FastIntegerDivide.cpp \
FastMathFunctions.cpp \
FindCalls.cpp \
FindIntrinsics.cpp \
FlattenNestedRamps.cpp \
265 changes: 265 additions & 0 deletions src/ApproximationTables.cpp

Large diffs are not rendered by default.

32 changes: 32 additions & 0 deletions src/ApproximationTables.h
@@ -0,0 +1,32 @@
#ifndef HALIDE_APPROXIMATION_TABLES_H
#define HALIDE_APPROXIMATION_TABLES_H

#include <vector>

#include "IROperator.h"

namespace Halide {
namespace Internal {

struct Approximation {
ApproximationPrecision::OptimizationObjective objective;
struct Metrics {
double mse;
double mae;
double mulpe;
} metrics_f32, metrics_f64;
std::vector<double> coefficients;
};

const Approximation *best_atan_approximation(Halide::ApproximationPrecision precision, Type type);
const Approximation *best_sin_approximation(Halide::ApproximationPrecision precision, Type type);
const Approximation *best_cos_approximation(Halide::ApproximationPrecision precision, Type type);
const Approximation *best_tan_approximation(Halide::ApproximationPrecision precision, Type type);
const Approximation *best_log_approximation(Halide::ApproximationPrecision precision, Type type);
const Approximation *best_exp_approximation(Halide::ApproximationPrecision precision, Type type);
const Approximation *best_expm1_approximation(Halide::ApproximationPrecision precision, Type type);

} // namespace Internal
} // namespace Halide

#endif
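
For orientation, the lookup functions above return a table entry whose coefficients describe a fitted polynomial. A minimal, hypothetical sketch of evaluating such an entry with Horner's scheme follows; the actual lowering in FastMathFunctions.cpp performs its own range reduction and term ordering, so this is illustrative only (eval_approximation is not part of the PR).

    // Hypothetical sketch using Halide::Internal helpers; not the PR's actual lowering.
    // Assumes coefficients[0] is the constant term of the fitted polynomial.
    Expr eval_approximation(const Approximation *approx, const Expr &x) {
        const std::vector<double> &c = approx->coefficients;
        internal_assert(!c.empty());
        // Start from the highest-order coefficient and fold in the rest.
        Expr result = make_const(x.type(), c.back());
        for (size_t i = c.size() - 1; i-- > 0;) {
            result = result * x + make_const(x.type(), c[i]);  // Horner step
        }
        return result;
    }
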
6 changes: 4 additions & 2 deletions src/CMakeLists.txt
@@ -115,6 +115,7 @@ target_sources(
ExternFuncArgument.h
ExtractTileOperations.h
FastIntegerDivide.h
FastMathFunctions.h
FindCalls.h
FindIntrinsics.h
FlattenNestedRamps.h
@@ -220,8 +221,7 @@ target_sources(
WrapCalls.h
)

# The sources that go into libHalide. For the sake of IDE support, headers that
# exist in src/ but are not public should be included here.
# The sources that go into libHalide.
target_sources(
Halide
PRIVATE
@@ -233,6 +233,7 @@
AlignLoads.cpp
AllocationBoundsInference.cpp
ApplySplit.cpp
ApproximationTables.cpp
Argument.cpp
AssociativeOpsTable.cpp
Associativity.cpp
@@ -291,6 +292,7 @@ target_sources(
Expr.cpp
ExtractTileOperations.cpp
FastIntegerDivide.cpp
FastMathFunctions.cpp
FindCalls.cpp
FindIntrinsics.cpp
FlattenNestedRamps.cpp
6 changes: 6 additions & 0 deletions src/CSE.cpp
@@ -33,6 +33,12 @@ bool should_extract(const Expr &e, bool lift_all) {
return false;
}

if (const Call *c = e.as<Call>()) {
if (c->type == type_of<ApproximationPrecision *>()) {
return false;
}
}

if (lift_all) {
return true;
}
7 changes: 7 additions & 0 deletions src/CodeGen_Metal_Dev.cpp
@@ -858,6 +858,13 @@ void CodeGen_Metal_Dev::init_module() {
<< "#define acosh_f16 acosh\n"
<< "#define tanh_f16 tanh\n"
<< "#define atanh_f16 atanh\n"
<< "#define fast_sin_f32 fast::sin \n"
<< "#define fast_cos_f32 fast::cos \n"
<< "#define fast_tan_f32 fast::tan \n"
<< "#define fast_exp_f32 fast::exp \n"
<< "#define fast_log_f32 fast::log \n"
<< "#define fast_pow_f32 fast::pow \n"
<< "#define fast_tanh_f32 fast::tanh \n"
<< "#define fast_inverse_sqrt_f16 rsqrt\n"
<< "constexpr half half_from_bits(unsigned short x) {return as_type<half>(x);}\n"
<< "constexpr half nan_f16() { return half_from_bits(32767); }\n"
6 changes: 6 additions & 0 deletions src/CodeGen_OpenCL_Dev.cpp
@@ -1148,6 +1148,12 @@ void CodeGen_OpenCL_Dev::init_module() {
<< "#define acosh_f32 acosh \n"
<< "#define tanh_f32 tanh \n"
<< "#define atanh_f32 atanh \n"
<< "#define fast_sin_f32 native_sin \n"
<< "#define fast_cos_f32 native_cos \n"
<< "#define fast_tan_f32 native_tan \n"
<< "#define fast_exp_f32 native_exp \n"
<< "#define fast_log_f32 native_log \n"
<< "#define fast_pow_f32 native_powr \n"
<< "#define fast_inverse_f32 native_recip \n"
<< "#define fast_inverse_sqrt_f32 native_rsqrt \n";

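
To see the effect of these defines from the user's side, here is a minimal, hypothetical pipeline sketch: with this PR, fast_sin and fast_exp can lower to the native_* built-ins on OpenCL (or the fast:: namespace functions on Metal) when the requested precision permits, instead of an inlined polynomial. The target and schedule below are illustrative only.

    #include "Halide.h"
    using namespace Halide;

    int main() {
        Func f("f");
        Var x("x"), y("y"), xi("xi"), yi("yi");
        // fast_sin / fast_exp are existing Halide intrinsics; with this PR they
        // can map onto native_sin / native_exp when lowered for OpenCL.
        f(x, y) = fast_sin(cast<float>(x) * 0.01f) + fast_exp(cast<float>(y) * -0.5f);

        Target t = get_host_target().with_feature(Target::OpenCL);
        f.gpu_tile(x, y, xi, yi, 16, 16);
        f.realize({256, 256}, t);
        return 0;
    }
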
4 changes: 2 additions & 2 deletions src/CodeGen_PTX_Dev.cpp
@@ -572,7 +572,7 @@ string CodeGen_PTX_Dev::mattrs() const {
return "+ptx70";
} else if (target.has_feature(Target::CUDACapability70) ||
target.has_feature(Target::CUDACapability75)) {
return "+ptx60";
return "+ptx70";
} else if (target.has_feature(Target::CUDACapability61)) {
return "+ptx50";
} else if (target.features_any_of({Target::CUDACapability32,
@@ -728,7 +728,7 @@ vector<char> CodeGen_PTX_Dev::compile_to_src() {
if (debug::debug_level() >= 2) {
dump();
}
debug(2) << "Done with CodeGen_PTX_Dev::compile_to_src";
debug(2) << "Done with CodeGen_PTX_Dev::compile_to_src\n";

debug(1) << "PTX kernel:\n"
<< outstr.c_str() << "\n";
206 changes: 106 additions & 100 deletions src/Derivative.cpp
@@ -30,12 +30,20 @@ using FuncKey = Derivative::FuncKey;
namespace Internal {
namespace {

bool is_float_extern(const string &op_name,
const string &func_name) {
return op_name == (func_name + "_f16") ||
op_name == (func_name + "_f32") ||
op_name == (func_name + "_f64");
};
bool is_math_func(const Call *call,
const string &func_name,
Call::IntrinsicOp intrinsic_op = Call::IntrinsicOp::IntrinsicOpCount) {
if (call->is_extern()) {
const string &op_name = call->name;
return op_name == (func_name + "_f16") ||
op_name == (func_name + "_f32") ||
op_name == (func_name + "_f64");
} else if (call->is_intrinsic() && intrinsic_op != Call::IntrinsicOpCount) {
return call->is_intrinsic(intrinsic_op);
} else {
return false;
}
}

/** Compute derivatives through reverse accumulation
*/
@@ -1058,101 +1066,99 @@ void ReverseAccumulationVisitor::visit(const Select *op) {
void ReverseAccumulationVisitor::visit(const Call *op) {
internal_assert(expr_adjoints.find(op) != expr_adjoints.end());
Expr adjoint = expr_adjoints[op];
if (op->is_extern()) {
// Math functions
if (is_float_extern(op->name, "exp")) {
// d/dx exp(x) = exp(x)
accumulate(op->args[0], adjoint * exp(op->args[0]));
} else if (is_float_extern(op->name, "log")) {
// d/dx log(x) = 1 / x
accumulate(op->args[0], adjoint / op->args[0]);
} else if (is_float_extern(op->name, "sin")) {
// d/dx sin(x) = cos(x)
accumulate(op->args[0], adjoint * cos(op->args[0]));
} else if (is_float_extern(op->name, "asin")) {
// d/dx asin(x) = 1 / sqrt(1 - x^2)
Expr one = make_one(op->type);
accumulate(op->args[0], adjoint / sqrt(one - op->args[0] * op->args[0]));
} else if (is_float_extern(op->name, "cos")) {
// d/dx cos(x) = -sin(x)
accumulate(op->args[0], -adjoint * sin(op->args[0]));
} else if (is_float_extern(op->name, "acos")) {
// d/dx acos(x) = - 1 / sqrt(1 - x^2)
Expr one = make_one(op->type);
accumulate(op->args[0], -adjoint / sqrt(one - op->args[0] * op->args[0]));
} else if (is_float_extern(op->name, "tan")) {
// d/dx tan(x) = 1 / cos(x)^2
Expr c = cos(op->args[0]);
accumulate(op->args[0], adjoint / (c * c));
} else if (is_float_extern(op->name, "atan")) {
// d/dx atan(x) = 1 / (1 + x^2)
Expr one = make_one(op->type);
accumulate(op->args[0], adjoint / (one + op->args[0] * op->args[0]));
} else if (is_float_extern(op->name, "atan2")) {
Expr x2y2 = op->args[0] * op->args[0] + op->args[1] * op->args[1];
// d/dy atan2(y, x) = x / (x^2 + y^2)
accumulate(op->args[0], adjoint * (op->args[1] / x2y2));
// d/dx atan2(y, x) = -y / (x^2 + y^2)
accumulate(op->args[1], adjoint * (-op->args[0] / x2y2));
} else if (is_float_extern(op->name, "sinh")) {
// d/dx sinh(x) = cosh(x)
accumulate(op->args[0], adjoint * cosh(op->args[0]));
} else if (is_float_extern(op->name, "asinh")) {
// d/dx asin(x) = 1 / sqrt(1 + x^2)
Expr one = make_one(op->type);
accumulate(op->args[0], adjoint / sqrt(one + op->args[0] * op->args[0]));
} else if (is_float_extern(op->name, "cosh")) {
// d/dx cosh(x) = sinh(x)
accumulate(op->args[0], adjoint * sinh(op->args[0]));
} else if (is_float_extern(op->name, "acosh")) {
// d/dx acosh(x) = 1 / (sqrt(x - 1) sqrt(x + 1)))
Expr one = make_one(op->type);
accumulate(op->args[0],
adjoint / (sqrt(op->args[0] - one) * sqrt(op->args[0] + one)));
} else if (is_float_extern(op->name, "tanh")) {
// d/dx tanh(x) = 1 / cosh(x)^2
Expr c = cosh(op->args[0]);
accumulate(op->args[0], adjoint / (c * c));
} else if (is_float_extern(op->name, "atanh")) {
// d/dx atanh(x) = 1 / (1 - x^2)
Expr one = make_one(op->type);
accumulate(op->args[0], adjoint / (one - op->args[0] * op->args[0]));
} else if (is_float_extern(op->name, "ceil")) {
// TODO: d/dx = dirac(n) for n in Z ...
accumulate(op->args[0], make_zero(op->type));
} else if (is_float_extern(op->name, "floor")) {
// TODO: d/dx = dirac(n) for n in Z ...
accumulate(op->args[0], make_zero(op->type));
} else if (is_float_extern(op->name, "round")) {
accumulate(op->args[0], make_zero(op->type));
} else if (is_float_extern(op->name, "trunc")) {
accumulate(op->args[0], make_zero(op->type));
} else if (is_float_extern(op->name, "sqrt")) {
Expr half = make_const(op->type, 0.5);
accumulate(op->args[0], adjoint * (half / sqrt(op->args[0])));
} else if (is_float_extern(op->name, "pow")) {
Expr one = make_one(op->type);
accumulate(op->args[0],
adjoint * op->args[1] * pow(op->args[0], op->args[1] - one));
accumulate(op->args[1],
adjoint * pow(op->args[0], op->args[1]) * log(op->args[0]));
} else if (is_float_extern(op->name, "fast_inverse")) {
// d/dx 1/x = -1/x^2
Expr inv_x = fast_inverse(op->args[0]);
accumulate(op->args[0], -adjoint * inv_x * inv_x);
} else if (is_float_extern(op->name, "fast_inverse_sqrt")) {
// d/dx x^(-0.5) = -0.5*x^(-1.5)
Expr inv_sqrt_x = fast_inverse_sqrt(op->args[0]);
Expr neg_half = make_const(op->type, -0.5);
accumulate(op->args[0],
neg_half * adjoint * inv_sqrt_x * inv_sqrt_x * inv_sqrt_x);
} else if (op->name == "halide_print") {
for (const auto &arg : op->args) {
accumulate(arg, make_zero(op->type));
}
} else {
internal_error << "The derivative of " << op->name << " is not implemented.";
// Math functions (Can be both intrinsic and extern).
if (is_math_func(op, "exp", Call::fast_exp)) {
// d/dx exp(x) = exp(x)
accumulate(op->args[0], adjoint * exp(op->args[0]));
} else if (is_math_func(op, "log", Call::fast_log)) {
// d/dx log(x) = 1 / x
accumulate(op->args[0], adjoint / op->args[0]);
} else if (is_math_func(op, "sin", Call::fast_sin)) {
// d/dx sin(x) = cos(x)
accumulate(op->args[0], adjoint * cos(op->args[0]));
} else if (is_math_func(op, "asin")) {
// d/dx asin(x) = 1 / sqrt(1 - x^2)
Expr one = make_one(op->type);
accumulate(op->args[0], adjoint / sqrt(one - op->args[0] * op->args[0]));
} else if (is_math_func(op, "cos", Call::fast_cos)) {
// d/dx cos(x) = -sin(x)
accumulate(op->args[0], -adjoint * sin(op->args[0]));
} else if (is_math_func(op, "acos")) {
// d/dx acos(x) = - 1 / sqrt(1 - x^2)
Expr one = make_one(op->type);
accumulate(op->args[0], -adjoint / sqrt(one - op->args[0] * op->args[0]));
} else if (is_math_func(op, "tan", Call::fast_tan)) {
// d/dx tan(x) = 1 / cos(x)^2
Expr c = cos(op->args[0]);
accumulate(op->args[0], adjoint / (c * c));
} else if (is_math_func(op, "atan", Call::fast_atan)) {
// d/dx atan(x) = 1 / (1 + x^2)
Expr one = make_one(op->type);
accumulate(op->args[0], adjoint / (one + op->args[0] * op->args[0]));
} else if (is_math_func(op, "atan2", Call::fast_atan2)) {
Expr x2y2 = op->args[0] * op->args[0] + op->args[1] * op->args[1];
// d/dy atan2(y, x) = x / (x^2 + y^2)
accumulate(op->args[0], adjoint * (op->args[1] / x2y2));
// d/dx atan2(y, x) = -y / (x^2 + y^2)
accumulate(op->args[1], adjoint * (-op->args[0] / x2y2));
} else if (is_math_func(op, "sinh")) {
// d/dx sinh(x) = cosh(x)
accumulate(op->args[0], adjoint * cosh(op->args[0]));
} else if (is_math_func(op, "asinh")) {
// d/dx asin(x) = 1 / sqrt(1 + x^2)
Expr one = make_one(op->type);
accumulate(op->args[0], adjoint / sqrt(one + op->args[0] * op->args[0]));
} else if (is_math_func(op, "cosh")) {
// d/dx cosh(x) = sinh(x)
accumulate(op->args[0], adjoint * sinh(op->args[0]));
} else if (is_math_func(op, "acosh")) {
// d/dx acosh(x) = 1 / (sqrt(x - 1) sqrt(x + 1)))
Expr one = make_one(op->type);
accumulate(op->args[0],
adjoint / (sqrt(op->args[0] - one) * sqrt(op->args[0] + one)));
} else if (is_math_func(op, "tanh", Call::fast_tanh)) {
// d/dx tanh(x) = 1 / cosh(x)^2
Expr c = cosh(op->args[0]);
accumulate(op->args[0], adjoint / (c * c));
} else if (is_math_func(op, "atanh")) {
// d/dx atanh(x) = 1 / (1 - x^2)
Expr one = make_one(op->type);
accumulate(op->args[0], adjoint / (one - op->args[0] * op->args[0]));
} else if (is_math_func(op, "ceil")) {
// TODO: d/dx = dirac(n) for n in Z ...
accumulate(op->args[0], make_zero(op->type));
} else if (is_math_func(op, "floor")) {
// TODO: d/dx = dirac(n) for n in Z ...
accumulate(op->args[0], make_zero(op->type));
} else if (is_math_func(op, "round")) {
accumulate(op->args[0], make_zero(op->type));
} else if (is_math_func(op, "trunc")) {
accumulate(op->args[0], make_zero(op->type));
} else if (is_math_func(op, "sqrt")) {
Expr half = make_const(op->type, 0.5);
accumulate(op->args[0], adjoint * (half / sqrt(op->args[0])));
} else if (is_math_func(op, "pow", Call::fast_pow)) {
Expr one = make_one(op->type);
accumulate(op->args[0],
adjoint * op->args[1] * pow(op->args[0], op->args[1] - one));
accumulate(op->args[1],
adjoint * pow(op->args[0], op->args[1]) * log(op->args[0]));
} else if (is_math_func(op, "fast_inverse")) {
// d/dx 1/x = -1/x^2
Expr inv_x = fast_inverse(op->args[0]);
accumulate(op->args[0], -adjoint * inv_x * inv_x);
} else if (is_math_func(op, "fast_inverse_sqrt")) {
// d/dx x^(-0.5) = -0.5*x^(-1.5)
Expr inv_sqrt_x = fast_inverse_sqrt(op->args[0]);
Expr neg_half = make_const(op->type, -0.5);
accumulate(op->args[0],
neg_half * adjoint * inv_sqrt_x * inv_sqrt_x * inv_sqrt_x);
} else if (op->is_extern() && op->name == "halide_print") {
for (const auto &arg : op->args) {
accumulate(arg, make_zero(op->type));
}
} else if (op->is_extern()) {
internal_error << "The derivative of " << op->name << " is not implemented.";
} else if (op->is_intrinsic()) {
if (op->is_intrinsic(Call::abs)) {
accumulate(op->args[0],
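
One consequence of the is_math_func rework above is that reverse-mode autodiff no longer hits the "derivative ... not implemented" error when it encounters the fast_* intrinsics. A hedged sketch of the kind of pipeline that should now differentiate cleanly (the fast_tanh front-end helper is assumed from this PR; names and sizes are illustrative):

    #include "Halide.h"
    using namespace Halide;

    int main() {
        Func f("f"), loss("loss");
        Var x("x");
        // fast_tanh is one of the intrinsics this PR adds (assumed front-end name).
        f(x) = fast_tanh(cast<float>(x) / 100.f);

        RDom r(0, 128);
        loss() = sum(f(r) * f(r));

        // ReverseAccumulationVisitor now recognizes the intrinsic via is_math_func
        // and applies d/du tanh(u) = 1 / cosh(u)^2.
        Derivative d = propagate_adjoints(loss);
        Func df = d(f);
        df.realize({128});
        return 0;
    }
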
2 changes: 1 addition & 1 deletion src/Error.cpp
@@ -106,7 +106,7 @@ namespace Internal {

void unhandled_exception_handler() {
// Note that we use __cpp_exceptions (rather than HALIDE_WITH_EXCEPTIONS)
// to maximize the change of dealing with uncaught exceptions in weird
// to maximize the chance of dealing with uncaught exceptions in weird
// build situations (i.e., exceptions enabled via C++ but HALIDE_WITH_EXCEPTIONS
// is somehow not set).
#ifdef __cpp_exceptions