
Conflict with decimal column type in generated batch #6835

Closed
yarmenteros opened this issue Oct 21, 2016 · 12 comments
Labels
closed-out-of-scope This is not something that will be fixed/implemented and the issue is closed.

Comments

yarmenteros commented Oct 21, 2016

The issue

I have a problem with decimal values when they are saved to the database. It is caused by a conflict between the decimal precision used for the temporary table in the generated batch (decimal(18,2)) and the decimal precision used to declare the corresponding parameter in the same batch (decimal(3,3)).

In my case the decimal value is 0.589, but it is saved in the database as the rounded value 0.590. The column type in my table is decimal(9,3).

My question is: if the correct type and precision were already determined for the parameter, why can't the same type and precision be used for the corresponding column of the temporary table in the batch?

The Batch:

Affected column: Cost
Column and type defined in the temporary table: [Cost] decimal(18, 2)
Column type and value defined as a parameter: @P71 decimal(3,3)
The value before running the batch: @P71=0.589
The saved value: 0.590
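The rounding can be reproduced directly in T-SQL (a minimal sketch, not taken from the batch below): when the decimal(3,3) parameter value is inserted into the decimal(18,2) table-variable column, SQL Server implicitly converts it and rounds to two decimal places.

```sql
-- Hypothetical repro of the implicit conversion inside the batch:
-- the decimal(3,3) parameter is converted to the decimal(18,2)
-- column type of the table variable, so the third decimal digit is lost.
DECLARE @p71 decimal(3,3) = 0.589;
SELECT CAST(@p71 AS decimal(18, 2));  -- rounds 0.589 to 0.59
```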

exec sp_executesql 
N'SET NOCOUNT ON;
DECLARE @toInsert1 
       TABLE ([CompanyGroupShipKey] uniqueidentifier, 
                     [Cost] decimal(18, 2), 
                     [CreationDate] datetime2, 
                     [CreationUser] nvarchar(max), 
                     [InventoryKey] uniqueidentifier, 
                     [LastModifiedDate] datetime2, 
                     [LastModifiedUser] nvarchar(max), 
                     [LotId] int, 
                     [Notes] nvarchar(max), 
                     [OrderDetailKey] uniqueidentifier, 
                     [ProductKey] uniqueidentifier, 
                     [ProductUnitMeasureKey] uniqueidentifier, 
                     [Quantity] decimal(18, 2), 
                     [ReceivingHeadKey] uniqueidentifier, 
                     [RowNumber] smallint, 
                     [_Position] [int]);
INSERT INTO @toInsert1
VALUES (@p55, @p56, @p57, @p58, @p59, @p60, @p61, @p62, @p63, @p64, @p65, @p66, @p67, @p68, @p69, 0),
(@p70, @p71, @p72, @p73, @p74, @p75, @p76, @p77, @p78, @p79, @p80, @p81, @p82, @p83, @p84, 1);

DECLARE @inserted1 
       TABLE ([ReceivingDetailKey] uniqueidentifier, 
              [CompanyGroupShipKey] uniqueidentifier, 
              [Cost] decimal(18, 2), 
              [CreationDate] datetime2, 
              [CreationUser] nvarchar(max), 
              [InventoryKey] uniqueidentifier, 
              [LastModifiedDate] datetime2, 
              [LastModifiedUser] nvarchar(max), 
              [LotId] int, 
              [Notes] nvarchar(max), 
              [OrderDetailKey] uniqueidentifier, 
              [ProductKey] uniqueidentifier, 
              [ProductUnitMeasureKey] uniqueidentifier, 
              [Quantity] decimal(18, 2), 
              [ReceivingHeadKey] uniqueidentifier, 
              [RowNumber] smallint, 
              [RowVersion] binary(8), 
              [_Position] [int]);
MERGE [ICS].[ReceivingDetail] USING @toInsert1 AS i ON 1=0
WHEN NOT MATCHED THEN
       INSERT ([CompanyGroupShipKey], [Cost], [CreationDate], [CreationUser], [InventoryKey], [LastModifiedDate], [LastModifiedUser], [LotId], [Notes], [OrderDetailKey], [ProductKey], [ProductUnitMeasureKey], [Quantity], [ReceivingHeadKey], [RowNumber])
       VALUES (i.[CompanyGroupShipKey], i.[Cost], i.[CreationDate], i.[CreationUser], i.[InventoryKey], i.[LastModifiedDate], i.[LastModifiedUser], i.[LotId], i.[Notes], i.[OrderDetailKey], i.[ProductKey], i.[ProductUnitMeasureKey], i.[Quantity], i.[ReceivingHeadKey], i.[RowNumber])
OUTPUT 
       INSERTED.[ReceivingDetailKey], INSERTED.[CompanyGroupShipKey], INSERTED.[Cost], INSERTED.[CreationDate], INSERTED.[CreationUser], INSERTED.[InventoryKey], INSERTED.[LastModifiedDate], INSERTED.[LastModifiedUser], INSERTED.[LotId], INSERTED.[Notes], INSERTED.[OrderDetailKey], INSERTED.[ProductKey], INSERTED.[ProductUnitMeasureKey], INSERTED.[Quantity], INSERTED.[ReceivingHeadKey], INSERTED.[RowNumber], INSERTED.[RowVersion], i._Position
INTO @inserted1;

SELECT [ReceivingDetailKey], [RowVersion] FROM @inserted1
ORDER BY _Position;
'
,
N'@p55 uniqueidentifier,
       @p56 decimal(4,3),
       @p57 datetime2(7),
       @p58 nvarchar(4000),
       @p59 nvarchar(4000),
       @p60 datetime2(7),
       @p61 nvarchar(4000),
       @p62 int,
       @p63 nvarchar(4000),
       @p64 uniqueidentifier,
       @p65 uniqueidentifier,
       @p66 uniqueidentifier,
       @p67 decimal(1,0),
       @p68 uniqueidentifier,
       @p69 smallint,
       @p70 uniqueidentifier,
       @p71 decimal(3,3),
       @p72 datetime2(7),
       @p73 nvarchar(4000),
       @p74 nvarchar(4000),
       @p75 datetime2(7),
       @p76 nvarchar(4000),
       @p77 int,
       @p78 nvarchar(4000),
       @p79 uniqueidentifier,
       @p80 uniqueidentifier,
       @p81 uniqueidentifier,
       @p82 decimal(1,0),
       @p83 uniqueidentifier,
       @p84 smallint'
,
       @p55='38E7DC3B-B92C-E611-80C3-9457A56B5217',
       @p56=1.250,
       @p57='2016-10-21 11:59:23.1022603',
       @p58=N'yarmenteros',
       @p59=NULL,
       @p60=NULL,
       @p61=NULL,
       @p62=0,
       @p63=NULL,
       @p64='01F0B11F-9F97-E611-80C7-9457A56B5217',
       @p65='D3BDC4C1-7755-4CA6-AD35-0BDBA72EDE6B',
       @p66='2BB6087F-3E5A-470A-A6E6-B70CF924F60B',
       @p67=0,
       @p68='70650155-A797-E611-80C7-9457A56B5217',
       @p69=10,
       @p70='38E7DC3B-B92C-E611-80C3-9457A56B5217',
       @p71=0.589,
       @p72='2016-10-21 11:59:23.1022603',
       @p73=N'yarmenteros',
       @p74=NULL,
       @p75=NULL,
       @p76=NULL,
       @p77=0,
       @p78=NULL,
       @p79='02F0B11F-9F97-E611-80C7-9457A56B5217',
       @p80='DADC25BD-3C29-4799-AC6C-1580752638D6',
       @p81='39FC3296-5A7F-4DC8-9ACE-BC0498CC0353',
       @p82=0,
       @p83='70650155-A797-E611-80C7-9457A56B5217',
       @p84=20

Workaround

To work around this issue, I explicitly configured the column type for the affected properties in my DbContext, like this:

entity.Property(e => e.Cost).HasColumnType("decimal(9,3)");
entity.Property(e => e.Quantity).HasColumnType("decimal(9,3)");
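The same mapping can also be expressed with data annotations on the entity itself. This is a sketch assuming the entity class is named ReceivingDetail (as the MERGE target in the batch suggests); it is an alternative form of the workaround above, not code from the issue.

```csharp
using System.ComponentModel.DataAnnotations.Schema;

public class ReceivingDetail
{
    // Explicit column types so the parameters and the temporary-table
    // columns in the generated batch agree with the database schema.
    [Column(TypeName = "decimal(9,3)")]
    public decimal Cost { get; set; }

    [Column(TypeName = "decimal(9,3)")]
    public decimal Quantity { get; set; }
}
```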

Thanks for your time

divega (Contributor) commented Oct 21, 2016

@yarmenteros Thanks for reporting. This may explain #6538.

@AndriySvyryd AndriySvyryd self-assigned this Oct 21, 2016
divega (Contributor) commented Oct 21, 2016

@yarmenteros we believe the unexpected behavior manifests only if the precision and scale configured in your model don't match the actual precision and scale of the column in the database. That is, if you create the database using EF Core with the code-first approach and you don't configure the precision and scale explicitly, then the column will be created with type decimal(18,2), which is the default picked by EF Core.

In other words, the workaround you found is the right solution: if you tell EF Core what the actual precision and scale in the column is then updates should work regardless of how many rows you update.

The reason this doesn't manifest when you update a single row is that we produce completely different SQL in that case, which doesn't rely on a table variable. The precision and scale are actually never set on the parameter, so we let the provider (SqlClient, in this case) decide what parameter facets to use based on the value passed.

divega (Contributor) commented Oct 21, 2016

Note for triage: There are a few things we could consider doing here:

  1. Do not change the runtime and just:
    • Make sure we document the default precision and scale we use, and the fact that users who work with an existing database should explicitly set the actual precision and scale used in their columns.
    • Fix bug #5410 (Update scaffolding to use ScaffoldingTypeMapper) in model scaffolding to make sure facets are preserved when the model is based on an existing database.
  2. Increase the default scale (and probably the precision as well) EF Core assumes so that truncation happens less often. This would have the disadvantage that databases created with previous versions of EF Core would have different precision and scale.
  3. Look into actually setting precision and scale of decimal parameters which would probably cause truncation to happen more consistently. We are intentionally not doing this today, but I am not sure we have ever looked at other possible ramifications of not doing it, like the possibility that this causes cache fragmentation similar to when we let SqlClient/SQL Server figure out the size of string parameters.
  4. Switch to simpler (and possibly slower) SQL for batching that doesn't rely on table variables.

ajcvickers (Contributor) commented

With regard to 2, possibly the update pipeline could use increased precision/scale if no explicit mapping has been set without changing the default used for DDL. However, I'm not sure if this can be done cleanly.

AndriySvyryd (Member) commented

@ajcvickers Yes, we already special case 'rowversion' there.

ajcvickers (Contributor) commented

@AndriySvyryd Why is it necessary to special case rowversion?

ajcvickers (Contributor) commented

@AndriySvyryd Never mind--I just looked at the code. :-)

AndriySvyryd (Member) commented

Wrong button.
It's not supported in a temporary table.

divega (Contributor) commented Oct 21, 2016

With regard to 2, possibly the update pipeline could use increased precision/scale if no explicit mapping has been set without changing the default used for DDL. However, I'm not sure if this can be done cleanly.

I think that could help decrease the chances of truncation but I don't think it would work for all values (because precision has limits and a scale that is too high could limit the number of integer digits that can be represented) so we could end up needing to make it user-configurable. It seemed to me that just changing the default precision and scale on the property level (which is already user-configurable) was more compelling.
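The integer-digit limitation mentioned above is easy to hit: for decimal(p, s), only p - s digits remain before the decimal point, so raising the default scale shrinks the range of representable values. A hypothetical illustration (not from the thread):

```sql
-- decimal(p, s) leaves p - s digits before the decimal point.
-- With a higher scale, values that fit today stop fitting:
DECLARE @ok  decimal(9, 3) = 123456.789;  -- fits: 6 integer digits allowed
DECLARE @bad decimal(9, 6) = 123456.789;  -- fails: only 3 integer digits allowed
-- Msg 8115: Arithmetic overflow error converting numeric to data type numeric.
```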

Anyway, I think point 3 could be interesting, especially because I am not sure if what we are doing now is fragmenting the query cache.

@ajcvickers ajcvickers removed this from the Backlog milestone Sep 7, 2017
@ajcvickers ajcvickers added punted-for-2.0 and removed closed-fixed The issue has been fixed and is/will be included in the release indicated by the issue milestone. labels Sep 7, 2017
@ajcvickers ajcvickers modified the milestones: 2.1.0-preview1, 2.1.0 Jan 17, 2018
3axap-4 commented Feb 27, 2018

Hello, I have an issue when working with Always Encrypted columns (type decimal(6,2)). When I try to add/update an entry in my database (I use DbContext for DB manipulation), I get the following error:

Operand type clash: decimal(1,0) encrypted with (encryption_type = 'DETERMINISTIC', encryption_algorithm_name = 'AEAD_AES_256_CBC_HMAC_SHA_256', column_encryption_key_name = '****', column_encryption_key_database_name = '****') is incompatible with decimal(6,2) encrypted with (encryption_type = 'DETERMINISTIC', encryption_algorithm_name = 'AEAD_AES_256_CBC_HMAC_SHA_256', column_encryption_key_name = '*****', column_encryption_key_database_name = '****')

I use the following model:

public class Model
{
    [Column(TypeName = "nvarchar(MAX)")]
    public string Name { get; set; }

    [Column(TypeName = "nvarchar(MAX)")]
    public string Description { get; set; }

    [Column(TypeName = "decimal(6,2)")]
    public decimal Fee { get; set; }
}

I also tried to specify the decimal format in the OnModelCreating method:

builder.Entity<Model>().Property(x => x.Fee).HasColumnType("decimal(6,2)");

Thanks for any advice

ajcvickers (Contributor) commented

@3axap-4 This looks like a different (although related) case than was tracked by this issue--can you please post a new issue including a runnable project/solution or complete code listing that demonstrates what you are seeing?

ajcvickers (Contributor) commented

Closing old issue as this is no longer something we intend to implement.

@ajcvickers ajcvickers removed this from the 5.0.0 milestone Nov 16, 2019
@ajcvickers ajcvickers added closed-out-of-scope This is not something that will be fixed/implemented and the issue is closed. and removed propose-close type-investigation labels Nov 16, 2019
@AndriySvyryd AndriySvyryd removed their assignment Mar 6, 2020
@ajcvickers ajcvickers reopened this Oct 16, 2022
@ajcvickers ajcvickers closed this as not planned Oct 16, 2022