
How to view which permissions a security role really has in Dynamics 365 for Finance and Operations


The key word in the title of this post is "really" - this isn't about how to look in the AOT or how to open the security forms in the browser - this is about how to check what an AOS is picking up as security permissions for a given role under the hood.

Why would I want to do that, I hear you ask? It's useful when I'm developing new security elements, because the AOS doesn't see them until I do a full build and database synchronize (sometimes just a synchronize, if everything is already built), and I can't always remember when I last did one - so this gives me a simple way to check what the AOS actually sees for a security role. It's also handy if you're troubleshooting something wrong with security in a deployed environment, because it shows you exactly what the AOS is working from.

How can I do it?

Earlier, I created a new privilege which granted a form control permission to a control called "GroupFinancialDimensionLine" on Forms\SalesTable. Then I created a role extension on the Accounts Receivable Clerk role and granted it my new privilege.

What I want to do now is see whether my AOS knows about it yet - or whether I need to run a full build/synch.

Querying my AXDB, I first look up the RecId for the role I modified, then use that RecId to check what permissions are set for SalesTable for that role, by looking in the SecurityRoleRuntime table.


select recId, * from SECURITYROLE where name = 'Accounts receivable clerk'

--query returned recId=13 for that record

SELECT T1.SECURITYROLE,T1.NAME,T1.CHILDNAME,T1.TYPE,T1.CREATEACCESS,T1.READACCESS,T1.UPDATEACCESS,T1.DELETEACCESS,
T1.CORRECTACCESS,T1.INVOKEACCESS,T1.PASTCREATEACCESS,T1.PASTREADACCESS,T1.PASTUPDATEACCESS,T1.PASTDELETEACCESS,T1.PASTCORRECTACCESS,
T1.PASTINVOKEACCESS,T1.CURRENTCREATEACCESS,T1.CURRENTREADACCESS,T1.CURRENTUPDATEACCESS,T1.CURRENTDELETEACCESS,T1.CURRENTCORRECTACCESS,
T1.CURRENTINVOKE,T1.FUTURECREATEACCESS,T1.FUTUREREADACCESS,T1.FUTUREUPDATEACCESS,T1.FUTUREDELETEACCESS,T1.FUTURECORRECTACCESS,
T1.FUTUREINVOKEACCESS,T1.RECVERSION,T1.RECID
FROM SECURITYROLERUNTIME T1
WHERE (SECURITYROLE=13) AND NAME = 'SALESTABLE'

A couple of things to note:

- It's database synchronize that populates SECURITYROLERUNTIME.
- The AOS uses SECURITYROLERUNTIME as its definition of the detail of each role - this is how it knows what to allow a user to see/do and what not to.
- The AOS only reads from the table on startup**, and then it's cached.
- When you're deploying a package to an environment, no further action should be needed - the table will be populated if package deployment completes successfully.

In my example, after a database synchronize, I can see my new permission is there, and when I log in as a user with that permission it works.
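If you want to narrow that check down to just the new control, rather than everything the role has on the form, a filter on CHILDNAME works too. This is a minimal sketch, assuming the control appears under its AOT name, and reusing the RecId (13) found above:


SELECT T1.NAME, T1.CHILDNAME, T1.READACCESS, T1.UPDATEACCESS, T1.CREATEACCESS, T1.DELETEACCESS
FROM SECURITYROLERUNTIME T1
WHERE T1.SECURITYROLE = 13 --RecId of the role from the first query
AND T1.NAME = 'SalesTable'
AND T1.CHILDNAME = 'GroupFinancialDimensionLine' --the form control my new privilege grants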

**I said that an AOS only reads the table on startup - that's not strictly true, it just made a nicer bullet point. There is a cache synchronizing mechanism between AOSes, so that if someone modifies a role/permission in the UI, the other AOSes will pick up the change by re-reading the table:

- Each running AOS has in its memory a global user role version ID
- It's getting this from a special record in Tables\SysLastValue (there's an exploratory query after this list)
- Periodically (every few minutes) it checks that SysLastValue record to see whether the ID has changed - meaning another AOS has made a role change and notified the others by incrementing the global user role version ID stored in this table
- If it has changed, it flushes its cache and re-reads all the role information from SecurityRoleRuntime
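
If you're curious, you can poke at that SysLastValue record from SQL too. This is purely an exploratory sketch - I'm not certain of the exact ELEMENTNAME the kernel uses for the global user role version ID, and the VALUE column is a binary container, so don't expect it to be human-readable:


--Exploratory only: list system-wide SysLastValue records (blank USERID) whose
--element name looks security/role related; treat this as a starting point.
SELECT USERID, ELEMENTNAME, DESIGNNAME, RECORDTYPE, RECID
FROM SYSLASTVALUE
WHERE USERID = '' AND (ELEMENTNAME LIKE '%role%' OR ELEMENTNAME LIKE '%security%')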

A similar type of mechanism is used by the AOSes to check their server configuration, batch configuration and EntireTable cache settings/values.


Debug Dynamics 365 for Finance and Operations on-premises with Visual Studio remote debugger


In this article I’m going to explain how to use Visual Studio Remote Debugger to debug a Dynamics 365 for Finance and Operations AOS in an on-premises environment. Why would you want to do that? Well, if you have an issue occurring in an on-premises environment that you can't reproduce on your developer (also known as Tier1/onebox/dev box) environment, this allows you to attach Visual Studio from the developer environment to the on-premises AOS and debug X++ code.

There's another related article on here, to debug an on-premises AOS without Visual Studio, which may be useful depending on your circumstances.

Overview

The basic gist of this process is:
1. Use a D365 developer environment which is on the same domain (and of course the same network) as the AOS machine
2. Copy the remote debugging tools from the developer environment to the AOS
3. Run the remote debugger on the AOS
4. Open Visual Studio on the developer environment and attach to the remote debugger on the AOS
5. From this point, debug as normal

First let’s talk about why I’m using a developer environment which is joined to the domain: the remote debugger has a couple of authentication options – you can either set it to allow debugging from anyone (basically no authentication), or to use Windows authentication. It’s a bit naughty to use the no-authentication option: although the remote debugger wouldn’t be accessible from the internet, it would still allow access to the machine from the network without any control over it. So we’ll use the Windows authentication option, which means we need to be on the domain.

There’s nothing special about adding a developer environment to the domain, join as you would any other machine - I won't go into that here.

Copy the remote debugger to the AOS

On the developer environment you'll find "Remote Debugger folder" on the Windows start menu:

Copy the x64 folder from there and paste it onto the AOS you're going to debug. Note that if you have multiple AOSes in your on-premises environment, turn off all but one of them, so that all requests go to the one AOS you're debugging. Within the folder, double-click msvsmon.exe:

The remote debugger will open and look something like this. Take note of the machine name and port - in my case it's SQLAOSF1AOS1:4020.

Configure the developer environment

Now move over to the developer environment and log on as an account which is an Administrator of both the developer machine and the AOS machine you want to debug. Open Visual Studio, go to Tools > Options, and set the following options:

Dynamics 365, Debugging: Uncheck "Load symbols only for items in the solution"
Debugging, General: Uncheck "just my code"
Debugging, Symbols: add paths for all packages you want to debug, pointing to the location of the symbol files on the AOS you want to debug. Because my account is an Administrator on the AOS box, I can use the default C$ share to add those paths, like this:

Close the options form, then go to Debug > Attach to Process... In the window that appears, set the qualifier to the machine and port we saw earlier in the remote debugger on the AOS machine - in my case SQLAOSF1AOS1:4020. Then, at the bottom, click "Show processes from all users" and select the AXService.exe process - this is the AOS.

You'll get a warning; click Attach.

On the AOS machine, you'll see in the remote debugger that you've connected:

Now open some code and set a breakpoint - in my case I'm choosing Form\CustTable.init() - then open the application in the browser and open the form to hit your breakpoint.

Switching between source code files

When you want to step into a different source file - for example, stepping from CustTable.init() down into TaxWithholdParameters_IN::find() - you need to open the code for TaxWithholdParameters_IN manually from the Application Explorer (AOT) before you step into it. If you don't, you'll get a pop-up window asking where the source code file is. If that happens, just cancel the dialog, open the file from the AOT, and then double-click the current row in the call stack to force Visual Studio to realize you now have the source file.

Happy debugging!

How to copy a database from cloud tier 1 to on-premises in Dynamics 365 for Finance and Operations


In this post I'm going to explain how to copy a Dynamics 365 for Finance and Operations database from a cloud Tier 1 environment (also known as a onebox, or demo environment) to an on-premises environment. This might be useful if you're using a Tier 1 to create your golden configuration environment, which you'll use to seed the on-premises environments later.

I will post how to move a database in the other direction soon.

Overview

This process is relatively simple compared to the cloud version, because we're not switching between Azure SQL and SQL Server - it's all SQL Server. The basic gist of the process is:
1. Backup the database on the Tier 1 (no preparation needed)
2. Restore the database to the on-premises SQL instance
3. Run a script against the restored DB to update some values
4. Start an AOS in the on-premises and wait for it to automatically run database synchronize and deploy reports

Process

First back up the database on the Tier 1 environment and restore it to the on-premises environment - don't overwrite the existing on-premises database, keep that one and restore the new one with a different name - because we're going to need to copy some values across from the old DB to the new DB.

Now run this script against the newly restored DB, make sure to set the values for the database names correctly:


--Remove the database level users from the database
--these will be recreated after importing in SQL Server.
use AXDB_onebox --******************* SET THE NEWLY RESTORED DATABASE NAME****************************

declare
@userSQL varchar(1000)
set quoted_identifier off
declare userCursor CURSOR for
select 'DROP USER [' + name +']'
from sys.sysusers
where issqlrole = 0 and hasdbaccess = 1 and name != 'dbo' and name != 'NT AUTHORITY\NETWORK SERVICE'
OPEN userCursor
FETCH userCursor into @userSQL
WHILE @@Fetch_Status = 0
BEGIN
exec(@userSQL)
FETCH userCursor into @userSQL
END
CLOSE userCursor
DEALLOCATE userCursor

--now recreate the users copying from the existing database:
use AXDB --******************* SET THE OLD ON-PREMISES DATABASE NAME****************************
go
IF object_id('tempdb..#UsersToCreate') is not null
DROP TABLE #UsersToCreate
go
select 'CREATE USER [' + name + '] FROM LOGIN [' + name + '] EXEC sp_addrolemember "db_owner", "' + name + '"' as sqlcommand
into #UsersToCreate
from sys.sysusers
where issqlrole = 0 and hasdbaccess = 1 and name != 'dbo' and name != 'NT AUTHORITY\NETWORK SERVICE'
go
use AXDB_onebox --******************* SET THE NEWLY RESTORED DATABASE NAME****************************
go
declare
@userSQL varchar(1000)
set quoted_identifier off
declare userCursor CURSOR for
select sqlcommand from #UsersToCreate
OPEN userCursor
FETCH userCursor into @userSQL
WHILE @@Fetch_Status = 0
BEGIN
exec(@userSQL)
FETCH userCursor into @userSQL
END
CLOSE userCursor
DEALLOCATE userCursor

--Storage isn't copied from one environment to another because it's stored outside
--of the database, so clearing the links to stored documents
UPDATE T1
SET T1.STORAGEPROVIDERID = 0
, T1.ACCESSINFORMATION = ''
, T1.MODIFIEDBY = 'Admin'
, T1.MODIFIEDDATETIME = getdate()
FROM DOCUVALUE T1
WHERE T1.STORAGEPROVIDERID = 1 --Azure storage

--Clean up the batch server configuration, server sessions, and printers from the previous environment.
TRUNCATE TABLE SYSSERVERCONFIG
TRUNCATE TABLE SYSSERVERSESSIONS
TRUNCATE TABLE SYSCORPNETPRINTERS

--Remove records which could lead to accidentally sending an email externally.
UPDATE SysEmailParameters
SET SMTPRELAYSERVERNAME = ''
GO
UPDATE LogisticsElectronicAddress
SET LOCATOR = ''
WHERE Locator LIKE '%@%'
GO
TRUNCATE TABLE PrintMgmtSettings
TRUNCATE TABLE PrintMgmtDocInstance

--Set any waiting, executing, ready, or canceling batches to withhold.
UPDATE BatchJob
SET STATUS = 0
WHERE STATUS IN (1,2,5,7)
GO

--SysFlighting is empty in on-premises environments, so clean it up
TRUNCATE TABLE SYSFLIGHTING

--Update the Admin user record, so that I can log in again
UPDATE USERINFO
SET SID = x.SID, NETWORKDOMAIN = x.NETWORKDOMAIN, NETWORKALIAS = x.NETWORKALIAS,
IDENTITYPROVIDER = x.IDENTITYPROVIDER
FROM AXDB..USERINFO x --******************* SET THE OLD ON-PREMISES DATABASE NAME****************************
WHERE x.ID = 'Admin' and USERINFO.ID = 'Admin'

Now that the database is ready, we're going to rename the old on-premises database from AXDB to AXDB_old, and the newly restored database from AXDB_onebox to AXDB. This means we don't have to change the AOS configuration to point to a new database - we're using the same users and the same database name.
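The rename itself is plain SQL Server work. Here's a minimal sketch, assuming the database names used above - the SINGLE_USER switch just kicks out any connections still holding the databases open:


--Rename the old database out of the way, then promote the newly restored one.
ALTER DATABASE AXDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE AXDB MODIFY NAME = AXDB_old
ALTER DATABASE AXDB_old SET MULTI_USER

ALTER DATABASE AXDB_onebox SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE AXDB_onebox MODIFY NAME = AXDB
ALTER DATABASE AXDB SET MULTI_USER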

All we need to do is restart all the AOS processes (either reboot the machines or restart the AOS apps from service fabric explorer).

When the AOSes restart, one of them will run a database synchronize and deploy reports, because it can tell the database has changed. You can watch progress in the AOS event log – create a custom event log view for all events under “Services and applications\Microsoft\Dynamics”. When this has finished you’ll see a record appear in SF.SYNCLOG in the AXDB.
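If you'd rather poll for that from SQL than watch the event log, a quick check against that table is enough:


--A row appears here once the automatic database synchronize and report
--deployment have completed after the restart.
SELECT * FROM [SF].[SYNCLOG]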

Notes

A few other things to note:
- Only the Admin user can log in - because I'm assuming that the users from the onebox environment were all AAD cloud users, and that's not what the on-premises environment uses. The script above fixed the Admin user, but left the others as-is.
- To get Management Reporter working again, perform a reset.
- Storage (things like document handling documents) isn't kept in the database, so copying the database hasn't copied those things across. In the script above we cleared the links in the DocuValue table, so that we don't try to open docs from Azure storage which aren't there.
- The script has withheld all batch jobs, to stop anything running which shouldn't.
- Data stored in fields that were encrypted in the Tier 1 environment won't be readable in the restored database. There aren't many fields like this; details are in the "Document the values of encrypted field" section here.

How to copy a database from on-premises to cloud Tier 1 in Dynamics 365 for Finance and Operations


In this post I'll explain how to copy a database from an on-premises environment and restore it to a Tier 1 (also known as onebox, or dev box) environment. Why would you want to do that? Well, typically so that you have some realistic data to develop against, or to debug a problem that you can only reproduce with that data.

If you've already read this post about copying a database in the other direction - tier1 to on-premises, then this process will be very familiar.

Overview

This process is relatively simple compared to the cloud version, because we're not switching between Azure SQL and SQL Server - it's all SQL Server. The basic gist of the process is:
1. Backup the database on the on-premises environment (no preparation needed)
2. Restore the database to the Tier 1 environment
3. Run a script against the restored DB to update some values
4. Open Visual Studio and run a database synchronize

Process

First back up the database on the on-premises environment and restore it to the Tier 1 environment - don't overwrite the existing Tier 1 database, keep that one and restore the new one with a different name - because we're going to need to copy some values across from the old DB to the new DB.

Now run this script against the newly restored DB, make sure to set the values for the database names correctly:


--Remove the database level users from the database
--these will be recreated after importing in SQL Server.
use AXDB_onpremises --******************* SET THE NEWLY RESTORED DATABASE NAME****************************

declare
@userSQL varchar(1000)
set quoted_identifier off
declare userCursor CURSOR for
select 'DROP USER [' + name +']'
from sys.sysusers
where issqlrole = 0 and hasdbaccess = 1 and name != 'dbo' and name != 'NT AUTHORITY\NETWORK SERVICE'
OPEN userCursor
FETCH userCursor into @userSQL
WHILE @@Fetch_Status = 0
BEGIN
exec(@userSQL)
FETCH userCursor into @userSQL
END
CLOSE userCursor
DEALLOCATE userCursor

--now recreate the users copying from the existing database:
use AXDB --******************* SET THE OLD TIER 1 DATABASE NAME****************************
go
IF object_id('tempdb..#UsersToCreate') is not null
DROP TABLE #UsersToCreate
go
select 'CREATE USER [' + name + '] FROM LOGIN [' + name + '] EXEC sp_addrolemember "db_owner", "' + name + '"' as sqlcommand
into #UsersToCreate
from sys.sysusers
where issqlrole = 0 and hasdbaccess = 1 and name != 'dbo' and name != 'NT AUTHORITY\NETWORK SERVICE'
go
use AXDB_onpremises --******************* SET THE NEWLY RESTORED DATABASE NAME****************************
go
declare
@userSQL varchar(1000)
set quoted_identifier off
declare userCursor CURSOR for
select sqlcommand from #UsersToCreate
OPEN userCursor
FETCH userCursor into @userSQL
WHILE @@Fetch_Status = 0
BEGIN
exec(@userSQL)
FETCH userCursor into @userSQL
END
CLOSE userCursor
DEALLOCATE userCursor

--Storage isn't copied from one environment to another because it's stored outside
--of the database, so clearing the links to stored documents
UPDATE T1
SET T1.STORAGEPROVIDERID = 0
, T1.ACCESSINFORMATION = ''
, T1.MODIFIEDBY = 'Admin'
, T1.MODIFIEDDATETIME = getdate()
FROM DOCUVALUE T1
WHERE T1.STORAGEPROVIDERID = 4 --Files stored in local on-premises storage

--Clean up the batch server configuration, server sessions, and printers from the previous environment.
TRUNCATE TABLE SYSSERVERCONFIG
TRUNCATE TABLE SYSSERVERSESSIONS
TRUNCATE TABLE SYSCORPNETPRINTERS

--Remove records which could lead to accidentally sending an email externally.
UPDATE SysEmailParameters
SET SMTPRELAYSERVERNAME = ''
GO
UPDATE LogisticsElectronicAddress
SET LOCATOR = ''
WHERE Locator LIKE '%@%'
GO
TRUNCATE TABLE PrintMgmtSettings
TRUNCATE TABLE PrintMgmtDocInstance

--Set any waiting, executing, ready, or canceling batches to withhold.
UPDATE BatchJob
SET STATUS = 0
WHERE STATUS IN (1,2,5,7)
GO

--Update the Admin user record, so that I can log in again
UPDATE USERINFO
SET SID = x.SID, NETWORKDOMAIN = x.NETWORKDOMAIN, NETWORKALIAS = x.NETWORKALIAS,
IDENTITYPROVIDER = x.IDENTITYPROVIDER
FROM AXDB..USERINFO x --******************* SET THE OLD TIER 1 DATABASE NAME****************************
WHERE x.ID = 'Admin' and USERINFO.ID = 'Admin'

Now that the database is ready, we're going to rename the old Tier 1 database from AXDB to AXDB_old, and the newly restored database from AXDB_onpremises to AXDB. This means we don't have to change the AOS configuration to point to a new database - we're using the same users and the same database name.

Note that to do the rename you'll need to stop the Management Reporter, batch, IIS and/or iisexpress services - otherwise SQL will say the database is in use.
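
Once those services are stopped, the rename is the same plain SQL Server work as in the other direction - a minimal sketch, assuming the database names used above:


--Kick out any lingering connections and swap the database names over.
ALTER DATABASE AXDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE AXDB MODIFY NAME = AXDB_old
ALTER DATABASE AXDB_old SET MULTI_USER

ALTER DATABASE AXDB_onpremises SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE AXDB_onpremises MODIFY NAME = AXDB
ALTER DATABASE AXDB SET MULTI_USER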

Then open Visual Studio and run a database synchronize. A Tier 1 environment doesn't have the auto-DB-synch mechanism that the on-premises environment does, so you have to run it yourself.

Notes

A few other things to note:
- Only the Admin user can log in - because the users from the on-premises environment were ADFS users, and that's not what a cloud Tier 1 environment uses. The script above fixed the Admin user, but left the others as-is.
- To get Management Reporter working again, perform a reset.
- Storage (things like document handling documents) isn't kept in the database, so copying the database hasn't copied those things across. In the script above we cleared the links in the DocuValue table, so that we don't try to open docs from local on-premises storage which aren't there.
- The script has withheld all batch jobs, to stop anything running which shouldn't.
- Data stored in fields that were encrypted in the on-premises environment won't be readable in the restored database. There aren't many fields like this; details are in the "Document the values of encrypted field" section here.

How to use Environment Monitoring View Raw Logs


This document explains how to use the "view raw logs" feature in LCS environment monitoring for your cloud Dynamics 365 for Finance and Operations environments. This is the ability to look at some of the telemetry data we record from your environments (for example, slow queries), giving you insight into issues you might have or, crucially, letting you react proactively before anyone notices there's an issue.

So what is this view raw logs?

Physically "view raw logs" is a button in LCS which shows you various telemetry data taken from your environment, things like long running queries. In the background this is surfacing telemetry data gathered from the environment - for all Microsoft-hosted cloud environments we're gathering telemetry data constantly. This is via instrumentation in our application, we are gathering a huge number of rows per hour from a busy environment. We store this in a Big Data solution in the cloud, this is more than just a SQL Database somewhere, as we are quite literally gathering billions and billions of rows per day, it's a pretty special system.

Timings - how quickly does it show in LCS and how long is it kept for?

There is approximately a 10 minute delay between capturing this data from an environment and being able to view it in LCS.

Data is available in LCS for 30 days - so you always have a rolling last 30 days.

A few limitations/frequently asked questions

- Is it available for on-premises? Not available for on-premises and not on the roadmap yet. This feature relies on uploading telemetry data to Microsoft cloud, so it doesn't feel right for on-premises.
- Is it available for ALL other environments? It's available for environments within your Implementation project - so Tier1-5 environments and Production. It's not available for environments you download or host on your own Azure subscription.
- Doesn't Microsoft monitor and fix everything for me, so I don't need to look at anything? This can be a sensitive subject; Microsoft are monitoring production, and will contact you if they notice an issue which you need to resolve (that's quite new). Customers/partners still own their code, and Microsoft won't change your code for you. During implementation and testing you're trying to make sure all is good before you go live, and this feature is useful during that period too. So the reality is that it's a little bit on all parties.
- Is there business data shown/stored in telemetry? No. From a technical perspective this does mean things like user name and infolog messages are not shown, which as a Developer is annoying, but understandable.

Where to find view raw logs?

From your LCS project, click on "Full details" next to the environment you want to view, a new page opens, then scroll to the bottom of the page and click "Environment monitoring" link, a new page opens, click "view raw logs" button (towards the right hand side), now you're on the View raw logs page!

Here's a walkthrough:



Explanation of the search fields

See below:

How to use "search terms" for a query?

This field allows you to search for any value in any column in the report. A common example would be looking for an Activity ID from an error message you get, for example:

An activity ID can be thought of as the ID which links together the log entries for a particular set of actions a user was taking – like confirming a PO. If you add a filter on this in the “All logs” query, as below, then you’ll see all logs for the current activity the user was performing – this is showing all events tagged with that activityId.

Tell me what each query does!

All logs

This query can be used to view all events for a given user’s activity ID. If a user had a problem and saved the activity ID for you, then you can add it in the “search terms” box in this query and see all events for the process they were performing at the time. The exceptionMessage and exceptionStacktrace columns are useful for a developer to understand what may have caused a user’s issue; these are populated when TaskName = AosXppUnhandledException.

All error events

This is a version of “All logs” which is filtered to only show TaskName=CliError, which means Infolog errors. The only column on this report which isn’t already available in “All logs” is eventGroupId, which serves no practical purpose. It is not possible to identify which users had the errors (the user isn't captured directly on this TaskName). It is not possible to see the Infolog message shown to the user (because it could have contained business data, so it can't be captured). The callstack column shows the code call stack leading to the error.

User login events

This shows when user sessions logged on and off. The user IDs have been anonymized as GUIDs; to track them back to actual users, look at the "Telemetry ID" field in the "Users" form inside Dynamics, or, on environments where you have database access, look at OBJECTID in the USERINFO table. The report is pre-filtered to show 7 days of activity from the end date of your choosing. There is a maximum limit of 10,000 rows, so the report isn’t usable if the period contains more than 10,000 logins.
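Where you do have database access, the lookup is a one-liner - a minimal sketch, assuming the GUID from the report matches USERINFO.OBJECTID as described above:


--Find which Dynamics user a telemetry GUID belongs to. Replace the placeholder
--with the anonymized user ID from the report; if an exact match finds nothing,
--try a LIKE in case the GUID is stored with braces.
SELECT ID, NAME, NETWORKALIAS, OBJECTID
FROM USERINFO
WHERE OBJECTID = '00000000-0000-0000-0000-000000000000'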

This report could be useful for producing statistics about the number of unique users using the system per day/week/month, by dumping the results to Excel and aggregating them.

Error events for a specific form

This shows all TaskName=CliError (Infolog errors) for a specific form name you search for. The form name is the AOT name of the form, e.g. TrvExpenses, not the name you see on the menu, e.g. Expenses.
The call stack is visible for any errors found. This can be useful when users are reporting problems with a particular form but they haven't given you an ActivityId from an error message - using this query you can still find errors/call stacks related to the form.

Slow queries

This shows all slow queries for a time period. The SQL query and call stack are shown. The Utilization column is the total estimated time (AvgExecutionTimeinSeconds * ExecutionCount) – we're calling it "estimated" because it’s using the average execution time and not the actual time. Queries over 100ms are shown.

This is one of my favourites; it's very useful to run after a round of testing has completed to see how query performance was. A developer can easily see where long queries might be coming from (because SQL and call stack are given) and take action.

SQL Azure connection outages

Shows when SQL Azure was unavailable. This is very rare though - I've never seen it show any data.

Slow interactions

Ironically the "slow interactions" query takes a long time to run! The record limit isn’t respected on this query – it shows all records regardless. This means if you try to run it for longer periods it’ll fail with “query failed to execute” error message as the result set is too large, run it in small date ranges to prevent this.
This one includes the slow query data I mentioned earlier, and also more form related information, so what this one can give you is a rough idea of the buttons a user pressed (or I should say form interactions to be more technically correct) leading up to the slow query. If you're investigating a slow query, looking for it here will give you a bit more context about the form interactions.

Is batch throttled

This shows whether batches were throttled. The batch throttling feature temporarily prevents new batches from starting if the resource limits set within it are exceeded - the aim is to limit the resources batch processing can use, so that sufficient resources remain available for user sessions. The infoMessage column in this report shows which resource was exceeded.
Generally speaking you shouldn't hit the throttling limits - if you see data in here, it's likely you have a runaway batch job on your hands - find out which one and look at why it's going crazy.

Financial reporting daily error summary

Shows an aggregated summary of errors from the Financial Reporting processing service (which used to be called Management Reporter). This gives you a fast view of whether anything is wrong with Financial Reporting. It is hard-coded to filter for today, but as processing runs every 5 minutes in the background, that is OK. Typically, use this if a user reports something is wrong or missing in Financial Reporting, to get a quick look at whether any errors are being reported there.

Financial reporting long running queries

This shouldn't return any data normally - it might do if a reset has been performed on Financial Reporting and it's rebuilding all of its data. Generally, for customers and partners I would recommend not worrying about this one; it's more for Microsoft's benefit.

Financial reporting SQL query failures

Again, this one shouldn't return data normally. It helps to catch issues such as this: when copying databases between environments, if change tracking has been re-enabled, Financial Reporting can throw errors when it tries to make queries against change tracking.

Financial reporting maintenance task heartbeat

The Financial Reporting service reports a heartbeat to telemetry once a minute to prove it's running OK. This report shows that data summarized - it has 1 row per hour, and should show a count of 60 for each row (i.e. one per minute). This allows you to see whether the service is running and available. Note that the report doesn't respect the row limit, but as it's aggregated that doesn't cause a problem.

Financial reporting data flow

For those of you familiar with the old versions of Financial Reporting (or management reporter), this is similar to the output you used to get in the UI of the server app, where you can see the various integration tasks and whether they ran ok, and how many records they processed. This is useful for checking if the integration is running correctly or if one of the jobs is failing. Note that this query also ignores the row limit, so run it for a shorter time period or it'll run for a long time.

Financial reporting failed integration records

I'd skip this one; it shows just the timestamp and name for each integration task (similar to the "Financial reporting data flow" query above, but with less information). The name suggests it shows only failures, but actually it shows all rows regardless. Use the "Financial reporting data flow" query instead.

All events for activity

You can skip over this one - it's very similar to the "All logs" query, but it also has the SQL server name and SQL database name, which are irrelevant because you’ve already chosen an environment to view, so you know which server and database they are.

All crashes

This shows AOS crashes. It tells you how many crashes there were, but it’s not directly actionable from here. If you have data here, log a support ticket with Microsoft - on the Microsoft side we have more data available about the crash, which makes it actionable. Microsoft are proactively investigating crash issues we see through telemetry. Keeping up to date on platform updates helps prevent seeing crashes.

All deadlocks in the system

The title of this query is odd - "in the system"; ah, thanks for clarifying, I thought it was all deadlocks in the world. This shows SQL deadlocks, and gives the SQL statement and call stack. You can use this similarly to the "Slow queries" query: for example, after a round of testing has completed, review this log to check whether the tested code was generating deadlocks anywhere - and if it was, investigate the related X++ code.

Error events for activity

This is a filtered version of the query “All events for activity” showing only errors, which itself is a version of "All logs" - it means that if you've been given an ActivityId you could use this one to jump straight to only the error events relating to that activity, whereas "All logs" would show you errors plus other data.

Distinct user sessions

This one shows, for each user, how many sessions they’ve had during a time period. You could use this to look at user adoption of the environment - the number of unique users per day/week/month - see if users are actually using it. It is similar to "User login events", just aggregated.

All events for user

This one is named in a confusing way – really it is showing user interaction events for a user – so it’ll show you everything a user pressed in forms during the time period. The tricky thing is that user IDs are obfuscated, so you need to find the GUID for the user first – look it up in the "Users" form inside Dynamics. You might use this to see what a particular user was doing during a period, if you're trying to reproduce something they've reported and the user isn't being very forthcoming with information. The information shown here is a little difficult to interpret; it's very much developer-focused.

All events for browser session

This allows you to look up results using the session ID from an error message - remember, right at the beginning of this article, the screenshot about how to use the ActivityId from an error message? That message also contained a "Session ID", and this query lets you show the logs for that session. Think of an Activity ID as a set of related events within a session, and a Session ID as the overarching session containing everything from while the user was logged in that time.

Find the official page on Monitoring and Diagnostics here.

How to link SQL SPID to user in Dynamics 365 for Finance and Operations on-premises


Quick one today! How to link a SQL SPID back to a Dynamics user in Dynamics 365 for Finance and Operations on-premises. You use this when, for example, you have a blocking SQL process and you want to know which user in the application triggered it - this lets you look up the blocking SPID and find out which user it was.

Run this SQL:


select cast(context_info as varchar(128)) as ci,* from sys.dm_exec_sessions where session_id > 50 and login_name = 'axdbadmin'

The first column shows the Dynamics user. It's much like it was in AX 2012, except you don't need to go and set a registry key first.
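And if you're chasing a blocking problem, you can join that context_info onto the requests DMV to see both sides of the block in one go - a minimal sketch using standard SQL Server DMVs:


--For each blocked request, show the blocked and blocking SPIDs along with the
--Dynamics user (from context_info) on each side.
SELECT r.session_id AS blocked_spid,
       cast(blocked.context_info as varchar(128)) AS blocked_user,
       r.blocking_session_id AS blocking_spid,
       cast(blocker.context_info as varchar(128)) AS blocking_user,
       r.wait_type, r.wait_time
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_sessions blocked ON blocked.session_id = r.session_id
LEFT JOIN sys.dm_exec_sessions blocker ON blocker.session_id = r.blocking_session_id
WHERE r.blocking_session_id <> 0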

You can do the same thing in the cloud version, but there you don't need to do it in T-SQL, because in LCS you can go to Monitoring and the "SQL Now" tab, where you can see the SPID-to-user mapping for running SQL.

How to connect to SQL on BI servers in a Dynamics 365 for Finance and Operations environment


Another quick one - I had trouble this week connecting to the local SQL Server instance on the BI server in my Dynamics 365 for Finance and Operations cloud environment.

I was investigating an SQL Server Reporting Services (SSRS) issue and I wanted to be able to look at the execution logs in the SSRS database.

Looking at the SSRS configuration on the box it appeared that SSRS itself was connecting to the database as Network Service, but that didn't help me when trying to connect using SQL Server Management Studio (SSMS) myself, so I was doubting whether I could ever access the local SQL instance there.

In the end I realized there is a very simple solution - if you're logged on as the local Admin account and run SSMS normally, login to SQL will fail, but if you run SSMS as Administrator then login to SQL works fine (just use Windows authentication - the local Admin account is a SQL admin).
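
Once you're in, the execution history is in the standard SSRS views - a minimal sketch, assuming the default ReportServer database name on the box:


--Most recent report executions, with the time split across data retrieval,
--processing and rendering. ExecutionLog3 is the standard SSRS view.
SELECT TOP 50 ItemPath, UserName, TimeStart, TimeEnd,
       TimeDataRetrieval, TimeProcessing, TimeRendering, Status
FROM ReportServer.dbo.ExecutionLog3
ORDER BY TimeStart DESC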

How to scale out Dynamics 365 for Finance and Operations on-premises


In this post I’m going to explain how to scale out Dynamics 365 for Finance and Operations on-premises by adding new VMs to your instance.

 

Overview

The process is quite straightforward, and Service Fabric does the remaining work once a new node is added to the Service Fabric cluster. In this post, I’m going to showcase it by adding a new AOS node to an existing Dynamics 365 for Finance and Operations 7.3 with Platform Update 12 on-premises instance. Basically, the procedure is as follows.

  1. Update the Dynamics 365 for Finance and Operations on-premises configuration for the new AOS node
  2. Set up the new AOS machine for Dynamics 365 for Finance and Operations on-premises
  3. Add the new AOS machine as an AOS node in the Service Fabric cluster
  4. Verify the new AOS node is functional

Prerequisites

  1. The new AOS machine must fulfill the system requirements documented here
  2. Basic configuration of the new AOS machine (domain join, IP assignment, enabling file and printer sharing, and so on) is done

Procedures

Update the Dynamics 365 for Finance and Operations on-premises configuration for the new AOS node

  1. Update ConfigTemplate to include the new AOS node. For detailed instructions, please refer to the documentation here.
    a. Identify which fault and upgrade domain the new AOS node will belong to
    b. Update the AOSNodeType section to include the new AOS machine
  2. Add an A record for the new AOS node in the DNS zone for Dynamics 365 for Finance and Operations on-premises. For detailed instructions, please refer to the documentation here.
  3. Run the cmdlet Update-D365FOGMSAAccounts to update the grouped service accounts. For detailed instructions, please refer to the documentation here.
  4. Grant Modify permission on the aos-storage file share to the new AOS machine. For detailed instructions, please refer to the documentation here.

Set up the new AOS machine for Dynamics 365 for Finance and Operations on-premises

  1. Install prerequisites. For detailed instructions, please refer to the documentation here. These include:
    a. Integration Services
    b. SQL Client Connectivity SDK

  2. Add the gMSA svc-AXSF$ and the domain user AxServiceUser to the local administrators group
  3. Set up the VM. For detailed instructions, please refer to the documentation here.
    a. Copy the D365FFO-LBD folder from an existing AOS machine, then run the steps below in PowerShell as an administrator from the D365FFO-LBD folder

    NOTE: the D365FFO-LBD folder is generated by the Export-Scripts.ps1 script when Dynamics 365 for Finance and Operations on-premises is deployed, per the documentation here

    b. Run Configure-PreReqs.ps1 to install the prerequisite software on the new AOS machine
    c. Run the cmdlets below to complete the prerequisites on the new AOS machine
    .\Add-GMSAOnVM.ps1
    .\Import-PfxFiles.ps1
    .\Set-CertificateAcls.ps1

  4. Run Test-D365FOConfiguration.ps1 to verify all setup is done correctly on the new AOS machine
  5. Install the ADFS certificate and the SQL Server certificate:
    a. Install the ADFS SSL certificate into the Trusted Root Certification Authorities store of the Local Machine
    b. Install the SQL Server certificate (the .cer file) into the Trusted Root Certification Authorities store of the Local Machine

Add the new AOS machine as an AOS node in the Service Fabric cluster

  1. The full instructions for adding or removing a node in an existing Service Fabric cluster can be found here. The steps below are performed on the new AOS machine.
  2. Download, unblock and unzip the same version of the standalone Service Fabric for Windows Server package as is used by the existing Service Fabric cluster
  3. Run PowerShell with elevated privileges, and navigate to the location of the package unzipped in the step above
  4. Run the cmdlet below to add the new AOS machine as an AOS node in the Service Fabric cluster:

  .\AddNode.ps1 -NodeName <AOSNodeName> -NodeType AOSNodeType -NodeIPAddressorFQDN <NewNodeFQDNorIP> -ExistingClientConnectionEndpoint <ExistingNodeFQDNorIP>:19000 -UpgradeDomain <UpgradeDomain> -FaultDomain <FaultDomain> -AcceptEULA -X509Credential -ServerCertThumbprint <ServiceFabricServerSSLThumbprint> -StoreLocation LocalMachine -StoreName My -FindValueThumbprint <ServiceFabricClientThumbprint>

    Note the following elements in the cmdlet above:

    AOSNodeName – node name within the Service Fabric cluster. Refer to the configuration file or Service Fabric Explorer to see how the existing AOS nodes are named
    AOSNodeType – the node type of the new node
    NewNodeFQDNorIP – FQDN or IP of the new node
    ExistingNodeFQDNorIP – FQDN or IP of an existing node
    UpgradeDomain – upgrade domain specified in ConfigTemplate for the new node
    FaultDomain – fault domain specified in ConfigTemplate for the new node
    ServiceFabricServerSSLThumbprint – thumbprint of the Service Fabric server certificate, star.d365ffo.onprem.contoso.com
    ServiceFabricClientThumbprint – thumbprint of the Service Fabric client certificate, client.d365ffo.onprem.contoso.com
    LocalMachine, My – where the certificates are installed

    NOTE: Internet access is required, as the AddNode.ps1 script downloads the Service Fabric runtime package

  5. Once the new node has been added, set anti-virus exclusions to exclude the Service Fabric directories and processes
  6. Once the new node has synced, get and edit the existing Service Fabric configuration:
    a. Run the cmdlets below to connect to the Service Fabric cluster

    $ClusterName= "<ExistingNodeFQDNorIP>:19000"
    $certCN ="<ServiceFabricServerCertificateCommonName>"
    Connect-ServiceFabricCluster -ConnectionEndpoint $ClusterName -KeepAliveIntervalInSec 10 -X509Credential -ServerCommonName $certCN -FindType FindBySubjectName -FindValue $certCN -StoreLocation LocalMachine -StoreName My

    Note the following elements in the cmdlet above:

    ExistingNodeFQDNorIP – FQDN or IP of an existing node
    ServiceFabricServerCertificateCommonName – common name of the Service Fabric server certificate, *.d365ffo.onprem.contoso.com
    LocalMachine, My – where the certificate is installed

    b. Run the cmdlet Get-ServiceFabricClusterConfiguration and save the output as a JSON file
    c. Update ClusterConfigurationVersion with a new version number in the JSON file
    d. Remove the WindowsIdentities section from the JSON file
    e. Remove EnableTelemetry
    f. Remove FabricClusterAutoupgradeEnabled
  7. Start the Service Fabric configuration upgrade:
    a. Run the cmdlet below to start the configuration upgrade

    Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath <Path to Configuration File>

    b. Run the cmdlet below to monitor upgrade progress

    Get-ServiceFabricClusterUpgrade

Verify the new AOS is functional

  1. Confirm the new AOS machine has been added as an AOS node successfully
  2. Validate that the new AOS is functional as expected

Cleanup routines in Dynamics 365 for Finance and Operations


In Dynamics 365 for Finance and Operations, cleanup routines are available across various modules within the product. It is important to note that these cleanup routines should only be executed after detailed analysis and confirmation from the business that this data is no longer needed. Also, always test each routine first in a test environment prior to executing it in production. This article provides an overview of what is available today.

 

System administration

- Periodic tasks > Notification clean up: This is used to periodically delete records from the EventInbox and EventInboxData tables. The recommendation is also, if you don't use the alert functionality, to disable the alert batch job.

- Periodic tasks > Batch job history clean-up: The regular version of batch job history clean-up allows you to quickly clean all history entries older than a specified timeframe (in days). Any entry created before that cutoff will be deleted from the BatchJobHistory table, as well as from linked tables with related records (BatchHistory and BatchConstraintsHistory). This form performs better because it doesn't have to execute any filtering. (See the sketch after this list for a quick way to check how much history these tables currently hold.)

- Periodic tasks > Batch job history clean-up (custom): The custom batch job clean-up form should be used only when specific entries need to be deleted. This form allows you to clean up selected types of batch job history records, based on criteria such as status, job description, company, or user. Other criteria can be added using the Filter button.
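
As referenced above, before scheduling the batch history clean-up it's worth checking how much history you're actually carrying. A minimal sketch against the tables named in that entry:


--Row counts for the batch history tables, to gauge whether the clean-up is
--worth scheduling and how aggressive to be.
SELECT 'BATCHJOBHISTORY' AS tablename, COUNT(*) AS row_count FROM BATCHJOBHISTORY
UNION ALL
SELECT 'BATCHHISTORY', COUNT(*) FROM BATCHHISTORY
UNION ALL
SELECT 'BATCHCONSTRAINTSHISTORY', COUNT(*) FROM BATCHCONSTRAINTSHISTORY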

 

Data management

- Data management workspace > "Staging cleanup" tile: The data management framework makes use of staging tables when running data migration. Once data migration is completed, this data can be deleted using the "Staging cleanup" tile.

 

Warehouse management

- Periodic tasks > Clean up > Work creation history purge: This is used to delete work creation history records from the WHSWorkCreateHistory table, based on the number of days of history to keep, provided in the dialog.

- Periodic tasks > Clean up > Containerization history purge: This is used to delete containerization history from the WHSContainerizationHistory table, based on the number of days of history to keep, provided in the dialog.

- Periodic tasks > Clean up > Wave batch cleanup: This is used to clean up batch job history records related to the wave processing batch group.

- Periodic tasks > Clean up > Cycle count plan cleanup: This is used to clean up batch job history records related to cycle count plan configurations.

- Periodic tasks > Clean up > Mobile device activity log cleanup: This is used to delete mobile device activity log records from the WHSMobileDeviceActivityLog table, based on the number of days of history to keep, provided in the dialog.

- Periodic tasks > Clean up > Work user session log cleanup: This is used to delete work user session records from the WHSWorkUserSessionLog table, based on the number of hours to keep, provided in the dialog.

 

Inventory management

- Periodic tasks > Clean up > Calculation of location load: The WMSLocationLoad table is used to track the weight and volume of items and pallets. The summation of load adjustments job can be run to reduce the number of records in the WMSLocationLoad table and improve performance.

- Periodic tasks > Clean up > Inventory journals cleanup: It is used to delete posted inventory journals.

- Periodic tasks > Clean up > Inventory settlements cleanup: It is used to group closed inventory transactions or delete cancelled inventory settlements. Cleaning up closed or deleted inventory settlements can help free system resources. Do not group or delete inventory settlements too close to the current date or fiscal year, because part of the transaction information for the settlements is lost. Closed inventory transactions cannot be changed after they have been grouped, because the transaction information for the settlements is lost. Cancelled inventory settlements cannot be reconciled with finance transactions if they are deleted.

- Periodic tasks > Clean up > Inventory dimensions cleanup: This is used to maintain the InventDim table by deleting unused inventory dimension combination records that are not referenced by any transaction or master data. The records are deleted regardless of whether the transaction is open or closed. An inventory dimension combination record that is still referenced cannot be deleted, because when an InventDim record is deleted, related transactions cannot be reopened.

- Periodic tasks > Clean up > Dimension inconsistency cleanup: This is used to resolve dimension inconsistencies on inventory transactions that have been financially updated and closed. Inconsistencies might be introduced if the multisite functionality was activated during or before the upgrade process. Use this batch job only to clean up transactions that were closed before the multisite functionality was activated; do not use this batch job periodically.

- Periodic tasks > Clean up > On-hand entries cleanup: This is used to delete closed and unused entries for on-hand inventory that is assigned to one or more tracking dimensions. Closed transactions contain the value of zero for all quantities and cost values, and are marked as closed. Deleting these transactions can improve the performance of queries for on-hand inventory. Transactions will not be deleted for on-hand inventory that is not assigned to tracking dimensions.

- Periodic tasks > Clean up > Warehouse management on-hand entries cleanup: Deletes records in the InventSum and WHSInventReserve tables. These tables are used to store on-hand information for items enabled for warehouse management processing (WHS items). Cleaning up these records can lead to significant improvements in the on-hand calculations.

- Periodic tasks > Clean up > On-hand entries aggregation by financial dimensions: A tool to aggregate InventSum rows with zero quantities. This basically extends the previously mentioned cleanup tool by also cleaning up records which have the Closed field set to True. The reason this is needed is that in certain scenarios you might have no more quantity in InventSum for a certain combination of inventory dimensions, but there is still a value. In some cases these values will disappear, but the current design does allow values to remain from time to time. If, for example, you use batch numbers, each batch number (and the combined site, warehouse, etc.) creates a new record in InventSum. When the batch number is sold, you will see the quantity fields set to 0. In most cases the financial/physical value field is also set to 0, but in standard cost revaluation or other scenarios the value field may still show some amount. This is valid, and is the way Dynamics 365 for Finance and Operations handles costs at the financial inventory level, e.g. site level. Inventory value is determined in Dynamics 365 for Finance and Operations by records in InventSum, and in some cases inventory transactions (InventTrans) when reporting inventory values in the past. In the above scenario, this means that when you run inventory value reports, Dynamics 365 for Finance and Operations looks (initially) at InventSum and aggregates all records to site level, and reports the value for the item per site. The data from the individual records at batch number level is never used. The tool therefore goes through all InventSum records and finds the ones where there is no more quantity (the "No open quantities" field is True). There is no reason to keep these records, so Dynamics 365 for Finance and Operations finds the InventSum record for the same item with the same site, copies the values from the batch number level to the site level, and deletes the record. When you now run inventory value reports, Dynamics 365 for Finance and Operations still finds the same correct values. This reduces the number of InventSum records significantly in some cases, and can have a positive impact on the performance of any function which queries this table. (See the sketch after this list for a way to spot such records.)

- Periodic tasks > Clean up > Cost calculation details: Used to clean up cost calculation details.
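
As referenced in the on-hand aggregation entry above, here's a rough way to spot the records that tool targets: zero-quantity InventSum rows that still carry a value. This is only a sketch - I'm assuming the standard InventSum column names (CLOSEDQTY for "No open quantities", PHYSICALVALUE/POSTEDVALUE for the values), so verify against your own schema first:


--Zero-quantity InventSum records that still hold a value; these are the rows
--the aggregation job would roll up to site level and delete.
SELECT ITEMID, INVENTDIMID, PHYSICALVALUE, POSTEDVALUE, CLOSED, CLOSEDQTY
FROM INVENTSUM
WHERE CLOSEDQTY = 1 --"No open quantities" is true
AND (PHYSICALVALUE <> 0 OR POSTEDVALUE <> 0) --but a value remains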

 

General ledger

- Periodic tasks > Clean up ledger journals: It deletes general ledger, accounts receivable, and accounts payable journals that have been posted. When you delete a posted ledger journal, all information that’s related to the original transaction is removed. You should delete this information only if you’re sure that you won’t have to reverse the ledger journal transactions.

 

Sales and marketing

- Periodic tasks > Clean up > Delete sales orders: It deletes selected sales orders.

- Periodic tasks > Clean up > Delete quotations: It deletes selected quotations.

- Periodic tasks > Clean up > Delete return orders: It deletes selected return orders.

- Periodic tasks > Clean up > Sales update history cleanup: It deletes old update history transactions. All updates of confirmations, picking lists, packing slips, and invoices generate update history transactions. These transactions can be viewed in the History on update form.

- Periodic tasks > Clean up > Order events cleanup: Cleanup job for order events. The next step is to clear the check-boxes for order events that are not needed in the Order event setup form.

 

Production control

- Periodic tasks > Clean up > Production journals cleanup: It is used to delete unused journals.

- Periodic tasks > Clean up > Production orders cleanup: It is used to delete production orders that are ended.

- Periodic tasks > Clean up > Clean up registrations: It is recommended to clean up registrations periodically. The clean-up function does not delete data that is not processed. Make sure that you do not delete registrations that may be required later for documentation purposes.

- Periodic tasks > Clean up > Archive future registrations: It is used to remove future registrations from the raw registrations table.

 

Procurement and sourcing

- Periodic tasks > Clean up > Purchase update history cleanup: This is used to delete old purchase update history transactions. All updates of confirmations, picking lists, product receipts, and invoices generate update history transactions.

- Periodic tasks > Clean up > Delete requests for quotations: It is used to delete requests for quotation (RFQs) and RFQ replies. The corresponding RFQ journals are not deleted, but remain in the system.

- Periodic tasks > Clean up > Draft consignment replenishment order journal cleanup: It is used to clean up draft consignment replenishment order journals.

 

Introduction to troubleshooting Dynamics 365 Operations Mobile Application


Recently I had to look into an issue with the Dynamics 365 for Finance and Operations mobile application - specifically the "normal" mobile application, not the special warehousing one - so I thought I'd share what I learnt.

My initial impression, coming to the mobile app, was that when I publish mobile workspaces and then run them on my mobile device, most of the code would be running on the device, and that I'd have to do something fancy to see what it was doing.

That was completely wrong! All X++ logic is still on the AOS (sounds obvious now!); the mobile application is just displaying results back to the user. That means my first tip is to use TraceParser to trace what the application is doing - the same as when looking at an issue in the desktop browser, the trace will show X++ running, SQL queries, timings etc.

My second tip is related to the first - attach to the related AOS and debug the X++ logic behind the mobile workspace. Using TraceParser first will show which forms/classes it's using, so you can set your breakpoints in the right places.

The particular issue I was looking into wasn't related to X++ logic though - the problem was that if I logged into the mobile application it worked fine, but if I signed out, I couldn't log in again without uninstalling and reinstalling the app. For this issue I wanted to see how the mobile app was communicating with the outside world - with ADFS (this happened to be on-premises) and with the AOS. Normally in the desktop browser I'd use Fiddler to see what calls were being made and pick up errors in communication; the good news is that you can do the same with a mobile - you just connect the device and your laptop to the same WiFi and then set the device to use Fiddler on your laptop as a proxy (as described here). This setup gives you the ability to make tests on your device and see the results immediately in Fiddler on your laptop, just like you would with the desktop browser.

It is also possible to debug the code running on the device itself, but I didn't need to do that for my issue, so I'm saving that for a rainy day.
