SSIS Data Streaming Destination to PowerBI

Utilizing the SSIS Data Streaming Component to process Tabular Data for PowerBI

Prerequisites:

• WideWorldImporters Data warehouse Copy

• SQL Server 2016

• PowerBI Desktop

• Visual Studio 2015

Data Streaming Destination Description: “The Data Streaming Destination is a SQL Server Integration Services (SSIS) destination component that lets the OLE DB Provider for SSIS consume output of an SSIS package as a tabular result set. You can create a linked server that uses the OLE DB Provider for SSIS and then run a SQL query on the linked server to display data returned by the SSIS package.”

Data Feed Publishing Description: “The Data Feed Publishing Components include the following components: OLE DB Provider for SSIS, Data Streaming Destination, and SSIS Package Publish Wizard. The wizard lets you publish an SSIS package as a SQL view in a SQL Server database instance. The wizard helps you with creating a linked server that uses the OLE DB Provider for SSIS and a SQL view that represents a query on the linked server. You run the view to query results from the SSIS package as a tabular data set.”
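To make that concrete, the view the wizard creates is essentially a thin wrapper over an OPENQUERY call against the linked server. Here is a minimal sketch of that general shape; every name below is a placeholder for whatever you choose later in the wizard:

-- Sketch only: the view, linked server, folder, project, and package names are placeholders
CREATE VIEW [dbo].[SSISPackageView]
AS
SELECT *
FROM OPENQUERY([YourSSISLinkedServer],
               N'Folder=PowerBI_Deployments;Project=YourProject;Package=YourPackage.dtsx');

Selecting from a view of this shape is what triggers an execution of the deployed package and streams its output back as a tabular result set.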

In this tutorial, we will:

• Build an SSIS Package with the Data Streaming Destination Component

• Deploy the SSIS Package to the SSIS Catalog

• Publish the SSIS Package as a SQL View using the SSIS Package Publish Wizard

• Use the Published View as a Source in PowerBI

• Query the Deployed Package Directly

1.      Build an SSIS Package with the Data Streaming Destination Component

1.      For the Data Flow Task:

a.       Create an OLE DB Source that points to our WideWorldImporters data warehouse:

Set the Data access mode to SQL command and use the following query:

   

 

 


SELECT Dimension.Customer.Customer,
       Dimension.Customer.Category,
       [Buying Group],
       [Postal Code],
       COUNT(f_o.[Stock Item Key]) AS TotalItemsPurchased,
       SUM([Recommended Retail Price]) AS TotalCostPerItem
FROM Fact.[Order] f_o
INNER JOIN Dimension.City ON City.[City Key] = f_o.[City Key]
INNER JOIN Dimension.Customer ON Customer.[Customer Key] = f_o.[Customer Key]
INNER JOIN Dimension.[Stock Item] ON [Stock Item].[Stock Item Key] = f_o.[Stock Item Key]
GROUP BY Customer,
         Category,
         [Buying Group],
         [Postal Code]
ORDER BY Dimension.Customer.Customer;

2.     Create a Data Streaming Destination Task

 


3.     Deploy the SSIS Package to an SSIS Catalog

•         Deploy to the PowerBI_Deployments folder, where our SSIS package will be housed.

4.  Publish the SSIS Package as a SQL View using the SSIS Package Publish Wizard

From the Windows\Programs menu open:

Microsoft SQL Server 2016 \ SQL Server 2016 Data Feed Publishing Wizard

5.       Package Settings: Specify the deployed SSIS package that contains the Data Streaming Destination.

a.       Parameters:

    i.      You can define any Package, Project and Connection Parameters here.

*It is recommended to bind sensitive parameters to environment variables so that their values are not stored in plain text in the SQL view created by the wizard.

6.       Publish Settings:

a.       View: You can define your View name here.

b.       LinkedServer: The linked server that the view uses to query the SSIS Package.

c.        Use32BitRunTime: Indicates whether the package uses the 32-bit or 64-bit runtime.

d.       Timeout: Specify timeout in seconds.

e.       ADVANCED:

i.      Schema: specifies the Schema to use on the destination server.

ii.      Encrypt: Specify if data should be encrypted before sending it over the network.

iii.      TrustServerCertificate: When used with encrypt, enables encryption using a self-signed server certificate.

7.       Validation and Deployment

a.       Hit Next to Validate

i.      If validation of the linked server configuration fails:

1.       In SSMS, browse to Server Objects > Linked Servers > Providers > SSISOLEDB

a.       Right-click, select Properties, and enable Allow inprocess.

b.       When validation is complete, click Publish.

 8.       Test/Query the SSIS View

a.       From the database we specified in our data feed, right-click the view and choose Select Top 1000 Rows to verify the feed is working properly.
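You can also test the feed without the view by querying the deployed package directly through the linked server (the last item in the tutorial overview). A sketch, assuming the linked server name you chose in the wizard and your own folder, project, and package names:

-- Query the view the wizard created (use the name you specified in Publish Settings)
SELECT TOP (1000) * FROM [dbo].[SSISPackageView];

-- Or bypass the view and query the linked server directly
SELECT *
FROM OPENQUERY([YourSSISLinkedServer],
               N'Folder=PowerBI_Deployments;Project=YourProject;Package=YourPackage.dtsx');

Both statements should return the same result set, since the view is just a wrapper around the OPENQUERY call.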


9.     Use the Published View as a Source in PowerBI

 

a.       Click Get Data

 

b.       Choose SQL Server

10.       Below you can see the view displayed under the database to which we deployed.


11.       Click LOAD

12. I then went ahead and created some simple PowerBI dashboards off the view source. Every time this report is refreshed or executed, that in turn executes the SSIS package.


 

14.       Verify the SSISDB Metadata to see that the SSIS Package executed

 

a.       Use the following SQL to query the SSISDB Catalog meta-data.

 

USE SSISDB;
SELECT es.execution_id,
       folder_name,
       project_name,
       package_name,
       reference_id,
       reference_type,
       environment_folder_name,
       environment_name,
       es.start_time,
       es.end_time,
       es.execution_duration
FROM internal.executions
INNER JOIN internal.executable_statistics es ON es.execution_id = executions.execution_id;

This query will display your SSIS folder, project, package, environment (if applicable), and some execution stats on that item.

We can see some previous executions when I was testing the initial build of the report.


 

15.       Go back to the PowerBI report and hit Refresh

 

 

16.       Execute this SSISDB Query to see the newly created execution:

 

 

 

TALAVANT'S MEET OUR CONSULTANTS SERIES FEATURING: DAVE DUVARNEY

Meet Talavant's PRESIDENT, Dave DuVarney.

Dave has spent most of his career in consulting. He says, “I really love it.” He actually started in accounting, but quickly realized he had a passion for software development. He was then offered a job at a small niche BI firm in Seattle. The original firm he worked for was eventually acquired by a larger firm. With the larger organization, he was exposed to various clients big and small. He also had the opportunity to travel the world and help start an office in London. All of which, he says, were fantastic experiences.

How Talavant came about: Dave had always wanted to start his own consulting company. He was independent for a while, and meeting Rob Long set Talavant into motion. Dave and Rob have complementary skills and decided it was time to build a great practice around BI.

On the technical side, Dave is a developer at heart. He likes to create things, and most of that time has been spent with Microsoft tools: anything development-related around SQL Server and the Microsoft BI stack (SSIS, SSAS, SSRS, Power BI). He also really enjoys writing MDX. Lately, Dave has started working more with Azure tools, including Azure Data Factory, Azure Data Catalog, Azure SQL, PolyBase, and Web Apps.

As for Dave’s expertise, it is simple: years of consulting! He has worked with both small and large companies and has helped manage both small and large projects. With that breadth of experience, he knows what a company needs to do for its business intelligence program to succeed.

In Dave’s free time, when he is not running Talavant, he stays fairly busy with his two young girls. When he is not at their activities, he enjoys projects that produce something he can hold in his hands. Working with software all day, he mentions that he misses more tactile building. Lately woodworking has been a hobby of his, and he is currently building a tabletop for his office conference table.

Document Maps in SSRS

A document map is an interactive feature of an SSRS report that can generate a hierarchy of links that you can use to navigate the report. The document map will take on different forms, depending on the render format.

If you have any long paginated SSRS reports, consider using document maps. They've been around since SSRS 2005, but I think they're still underutilized, considering how many reports get exported to Excel and PDF.

Biml – Intro (Truncate and Load)

This will be an ongoing tutorial working with T-SQL, SSIS, Biml, and C#. The primary purpose of this series is to first introduce you to some of the formal concepts of Biml, such as configuring Packages, Connections, and Tasks. In future blogs we will learn how to create reusable Biml files for SSIS design patterns.

Tools you need:

  1. Visual Studio
  2. BimlExpress – At the time of writing, BIDS Helper does not work with Visual Studio 2015, so I am using BimlExpress, an add-in for Visual Studio.
  3. AdventureWorks 2016

Truncate and Load – Biml

First let’s create two destination tables in the AdventureWorksDW2016 database: Dest_Address and Dest_Person. *No data warehouse concepts here yet; we just want to get a clear picture of how Biml works.

CREATE TABLE [dbo].[Dest_Person](
    [ID] [INT] IDENTITY(1,1) NOT NULL,
    -- column lengths sized to match the CASTs in the source query used later in this post
    [FirstName] VARCHAR(50) NULL,
    [LastName] VARCHAR(50) NULL,
    [MiddleInitial] VARCHAR(1) NULL,
    CONSTRAINT [PK_Person] PRIMARY KEY CLUSTERED ([ID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[Dest_Address](
    [ID] [INT] IDENTITY(1,1) NOT NULL,
    -- VARCHAR lengths here are illustrative; size them to fit your source columns
    [AddressLine1] VARCHAR(60) NULL,
    [AddressLine2] VARCHAR(60) NULL,
    [City] VARCHAR(30) NULL,
    [State_Abbreviation] VARCHAR(3) NULL,
    [Zip] VARCHAR(15) NULL,
    CONSTRAINT [PK_Dest_Address] PRIMARY KEY CLUSTERED ([ID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

*There will be scenarios where it is more applicable (but not always) to truncate and load a destination table versus applying delta changes (inserts, updates, deletes). Let’s create our first Biml file, from which we can later generate our SSIS package. We do so by right-clicking on our project and choosing “Add New Biml File”. It is then created under the Miscellaneous folder as BimlScript.biml


1.) Connections

You start by typing a <, which opens a segment of code and brings up Biml’s IntelliSense. Every segment must begin with < and end with >.


 

Choose Connections and then Connection – where we will define our source and destination.

Here we have defined two connections, giving each a Name and a ConnectionString.
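A sketch of what that Connections block might look like; the connection strings are placeholders for your own environment, while the Source and Dest names are the ones reused throughout the rest of this post:

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <Connections>
        <!-- Connection strings are placeholders; point them at your own server and databases -->
        <Connection Name="Source" ConnectionString="Provider=SQLNCLI11;Data Source=localhost;Initial Catalog=AdventureWorks2016;Integrated Security=SSPI;" />
        <Connection Name="Dest" ConnectionString="Provider=SQLNCLI11;Data Source=localhost;Initial Catalog=AdventureWorksDW2016;Integrated Security=SSPI;" />
    </Connections>
</Biml>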


2.) Packages


From here we will be able to generate an actual package. We are giving the package a name and defining “Parallel” as our ConstraintMode. This of course means Tasks inside our package will run in parallel instead of sequentially (Linear). We will dive into that at a later time.

Go ahead and generate the package by right-clicking the Biml file and choosing Generate SSIS Packages. Notice we do not see any connections; that’s because we need to associate a connection with this package, so let’s do that!

Since we are still in the Control Flow, all we need is the Dest connection, because this is where our truncate SQL statement will be. Once we get into the Dataflow, that is where we will have our Source and Dest. Also, we do not need to redefine the whole connection string; we just associate it by typing:

<Connection ConnectionName="Dest"></Connection>
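In context, the Packages section of the Biml file might now look something like this sketch (the package name is illustrative; the ConstraintMode and the Dest connection come from the steps above):

<Packages>
    <Package Name="TruncateAndLoad" ConstraintMode="Parallel">
        <Connections>
            <Connection ConnectionName="Dest"></Connection>
        </Connections>
    </Package>
</Packages>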


Go ahead and re-generate the SSIS package to see that you now have a Dest connection in your Control Flow.

One last thing before we move on. We need to define the ProtectionLevel of the package here as well:


*Remember – our Project and Package ProtectionLevels must match.

Next, let’s add an Execute SQL Task to truncate our destination; then we will add our Source and Dest Dataflow to wrap up this introduction to Biml.

3.) Packages > Tasks, Variables, and Parameters


To Create a Variable:

<Variables>
    <Variable Name="VariableName" DataType="DataTypeHere">TheValue</Variable>
</Variables>

To Create Tasks:

<Tasks>
    <ExecuteSQL Name="" ConnectionName="" ResultSet="None">
        <VariableInput VariableName="User.Variable"></VariableInput>
    </ExecuteSQL>
</Tasks>

What it looks like in SSIS:
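For this tutorial’s truncate step, a filled-in version of the templates above might look like the following sketch (the variable name and the SQL text are illustrative):

<Variables>
    <Variable Name="TruncateSQL" DataType="String">TRUNCATE TABLE dbo.Dest_Person; TRUNCATE TABLE dbo.Dest_Address;</Variable>
</Variables>
<Tasks>
    <ExecuteSQL Name="Truncate Destination Tables" ConnectionName="Dest" ResultSet="None">
        <VariableInput VariableName="User.TruncateSQL"></VariableInput>
    </ExecuteSQL>
</Tasks>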


For our Dataflow, this will only create an empty Dataflow Task; next we will configure it.

To Configure DataFlow: (Ours will include a Source, RowCount, and Destination)

*Go ahead and change your ConstraintMode to “Linear”, as we want to truncate first and then load.

*Add another variable that will hold our RowCount data


DataFlow Source Input Options:

<VariableInput VariableName=""

<ExternalTableInput Table=""

<TableInput TableName=""

<TableFromVariableInput VariableName=""

OleDbSource

Provide a name, which connection to use, and your source query:

<OleDbSource Name="PersonSourceTable" ConnectionName="Source">
    <DirectInput>
        SELECT CAST(FirstName AS VARCHAR(50)) AS FirstName,
               CAST(LastName AS VARCHAR(50)) AS LastName,
               LEFT(CAST(MiddleName AS VARCHAR(1)), 1) AS MiddleInitial
        FROM Person.Person;
    </DirectInput>
</OleDbSource>

RowCount

Provide a name and a variable for your RowCount:

<RowCount Name="PersonExtractCount" VariableName="User.PersonExtractRowCount"></RowCount>

OleDbDestination

 

Provide a Name, a Connection, and point to a table using ExternalTableOutput

<OleDbDestination Name="Dest_Person" ConnectionName="Dest">
    <ExternalTableOutput Table="dbo.Dest_Person"></ExternalTableOutput>
</OleDbDestination>

 

 

We have three segments in this Dataflow (OleDbSource, RowCount, and OleDbDestination), which resembles the following in SSIS:
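Assembled, the three segments sit inside a Dataflow task’s Transformations collection. A sketch putting the snippets above together (only the task name is new and purely illustrative):

<Dataflow Name="DFT Load Dest_Person">
    <Transformations>
        <OleDbSource Name="PersonSourceTable" ConnectionName="Source">
            <DirectInput>
                SELECT CAST(FirstName AS VARCHAR(50)) AS FirstName,
                       CAST(LastName AS VARCHAR(50)) AS LastName,
                       LEFT(CAST(MiddleName AS VARCHAR(1)), 1) AS MiddleInitial
                FROM Person.Person;
            </DirectInput>
        </OleDbSource>
        <RowCount Name="PersonExtractCount" VariableName="User.PersonExtractRowCount"></RowCount>
        <OleDbDestination Name="Dest_Person" ConnectionName="Dest">
            <ExternalTableOutput Table="dbo.Dest_Person"></ExternalTableOutput>
        </OleDbDestination>
    </Transformations>
</Dataflow>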


Re-Generate your SSIS package and go ahead and Execute.


 

Final Biml File:

Soon we will be discussing automation using Biml, but even here you can see how reusable Biml can be. If we need to create another Truncate and Load SSIS package in the future, all we would need to do is change the sources and destinations (connections/tables).

TALAVANT'S MEET OUR CONSULTANTS SERIES FEATURING: SEAN FORGATCH

Meet one of Talavant's Consultants, Sean Forgatch.

Prior to starting his career at Talavant, Sean worked as an ETL engineer and consultant in the healthcare and insurance industries, where he focused primarily on Microsoft Business Intelligence development. He has also designed and implemented full-scale ETL frameworks and architecture, as well as analyzed and integrated various types of software solutions, from healthcare EMRs to ERPs.

Some of the main technical expertise that Sean brings to the Talavant team includes data integration, reporting, and analysis. He has a great deal of experience analyzing business requirements across various departments. Some of the areas in which he has worked include analyzing healthcare data, developing custom BI solutions around that data, and successfully achieving CMS measures, such as Meaningful Use stages and PQRS, without modifying the EMR application workflows.

When asked what a client would say about his past performance, these are some of the comments he has heard:

“Sean was able to identify and implement new technologies that were cost effective and enabled our organization to have a larger more instantaneous insight into our health plan data trends across the United States and UK. What used to take weeks to analyze can now be presented to us at the touch of a button.”

In Sean’s free time, he enjoys being active, and doing things such as running, playing tennis and riding his mountain bike. He also likes to volunteer and go boating… when the Wisconsin weather allows him to!

Foundational Data Warehouse – Testing Phase

Continuing the blog series, Foundational Data Warehouse, we will be taking a closer look into the next foundational building block, which is the Testing phase. The primary objective of this phase is to ensure the product meets the business requirements as defined during the Analysis phase. You should have completed your unit testing during the Design phase. This includes field-level testing and program-level testing (inserts, updates, deletes, error processing). Once the basics are covered you are ready to begin the Testing phase.

To enter the Testing phase, coding should be completed and the user interface frozen. Defect tracking is a crucial activity that helps track problems as well as determine the quality of the product. The status and ownership of all defects should be tracked. Frequent meetings to discuss the defects amongst the team members will be key to the success of this phase. The following is a chronological list that defines the order in which the testing should be done. Don’t get too hung up on the distinctions between the different types of tests. You really need to communicate amongst the team what the focus of each test should be and the intent and end result of each of the different types of tests. Again, defect tracking and frequent meetings are key to success in this phase.

Integration Testing – This type of testing generally comes after unit testing. Integration testing focuses on validating that multiple subcomponents of the system work correctly together (integrate together). In many cases there will be multiple developers working on different subcomponents within a project; the main objective is that the integrated components function properly as a group.

Regression Testing – Tests the Foundation to ensure recently introduced changes have not had a negative impact on the existing systems. The focus of Regression testing is to have an automated bucket of test-cases that encompass the entire foundation to ensure you did not impact existing programs or data flow.

System Test – Focuses on testing that simulates an end-to-end scenario of how the customer will use the system being built. This is where you want to focus on specific business scenarios to ensure the behavior of the application meets the expected requirements. This is my favorite type of testing; I like to think of it as trying to break the system. You should be focusing on both functional and non-functional requirements. You need to build detailed test cases that cover any functionality that could have been missed.

Performance Test – This type of testing focuses on the system robustness by stressing the system with high volumes of data. It also focuses on speed and stability, helping to uncover bottlenecks and establish a baseline for future batch patterns before moving to production. If you’re lucky your company will have a separate database that simulates production that can be used for performance testing.

User Acceptance Test (UAT) – This type of testing is generally completed by the end-users to help ensure the requirements are being fulfilled. UAT should be performed with production like data (hopefully in performance environment) and focus on end-users performing scenarios that will be required during their normal daily usage. This will help ensure the application performance meets end-users needs. The goal is that by the end of UAT a representative of the end-users will formally accept the application and sign off on approval.

When you come out of the Testing phase you should have little to no known bugs and feel confident that the product meets all the end-user requirements, doesn’t impact existing systems, and performs acceptably.

Deliverables:

Integration Testing Complete: The BI/ETL developers generally create this document. This is the start of incorporating the dependencies of the newly designed system and validating that the data flows correctly once you have integrated the components together as a whole.

Regression Testing Complete: The Solution Lead or someone who has a good understanding of the current batch processing will create the regression testing documentation. Try to make these reusable over the releases so that you will only have to add the respective modifications to the regression testing. By the end of this phase you should be able to provide a diagram of the data movement strategy and how the new/modified objects fit into the current batch process. Also, you should have automated test cases setup that continue to monitor the entire batch process and ensures that the newly added programs didn’t impact any existing objects.

System Test Cases Complete: This document is usually created by the Business Analyst or Solution Lead who is familiar with the business processes. In many cases you may have a script that runs daily to ensure the expected results are continually being met and sends out alerts for any test case failures.

Performance Testing Complete: The Solution Lead generally creates this document, focusing on scalability and stress testing the systems. Be sure to document any indexes that need to be added for performance, to help ensure that any modifications don’t impact other systems. This should also include documentation that helps monitor the batch cycle, current processes, etc.

User Acceptance Test Plan Complete: This document is usually created as a partnership between the end user and the Business Analyst or some other lead who has a good understanding of the agreed-upon requirements. This is generally the business’s first look at the product you will be producing, so make sure the business tests thoroughly enough that they feel confident in the product they will be receiving. Test cases tend to be non-technical. By the end of UAT you should have the end users sign off on the finished product that will be deployed to production.

In conclusion, the testing phase is a very important part of the project lifecycle. There should be a lot of communication taking place amongst the team members. By the end of this phase the end-user should have a clear vision of what is being delivered and feel confident in the end result. You should be ready for code freeze and feel confident moving into the next phase leading into my next blog, Implementation and Warranty.

TALAVANT'S MEET OUR CONSULTANTS SERIES FEATURING: GILL ROWLEY

Meet one of Talavant's Senior Consultants, Gill Rowley.

Gill has been around data for the majority of his career. Having worked as a DBA for quite some time before transitioning to BI, he knows the ins and outs of a database. He originally started his career as a DBA and WebSphere administrator for a large insurance company in Madison. Gill was responsible for installing, configuring, and maintaining the SQL Server database systems, as well as maintaining the corporate web platforms and sites. From there, he transitioned to a different consulting company, prior to Talavant, where he delivered business intelligence solutions within the poultry industry.

Some of the main technical expertise that Gill brings to the Talavant team is his several years of experience in all aspects of Database Administration. He can assist with virtually anything, from installation and configuration to performance tuning to high availability/disaster recovery. In addition, he has quite a bit of experience with SQL Server Integration Services and has worked with different data from all types of industries.

When asked what a client would say about his past performance, these are some of the comments he has heard:
“People have said I have developed solutions that make their heads spin. I’ve had clients notify my superiors of my outstanding work on troubleshooting long-running processes, in one case taking a 20-minute process and reducing it to 10 seconds. I get a quick understanding of the problems presented to me, and work diligently with the client to find the best solution.”

In Gill’s free time, when he is not solving data problems, he enjoys hunting, fishing, boating, and riding his motorcycle. He also enjoys brewing his own beer!

Talavant's Meet our Consultants Series featuring: Jess Rabida

Meet one of Talavant's Architects, Jess Rabida.

Jess has worked as an Architect Lead in BI for quite some time and has been responsible for overseeing the strategic direction of the enterprise data warehouse (EDW). She has been accountable for the overall EDW foundation and the entire process from analysis, design, and build through implementation. Her responsibilities included the design of the dimensional model, leading the analysis, the data movement strategy, overall design, development, and front-end dashboards of the EDW to support the reporting needs of the business.

One of the main skills she brings to the Talavant team is her strong data warehousing experience. She truly understands the data warehouse lifecycle and can be put into any environment and understand what the client expects the end result to be. In addition, Jess is also strong when it comes to analysis and testing.

When asked what a client would say about her past performance, these are some of the comments she has heard:

"I have been so impressed with how quickly you picked up the meaningful use quality measure work. You also built very strong relationships and have been great to work with. Your combination of exceptional technical skills along with being able to connect well with diverse personalities is very rare".

Finally, when Jess is not building Data Warehouses, she enjoys spending time with her family, camping, working out and reading.

Handling Oracle CLOBs in Biml

As I've written previously, you can use the Attunity Oracle connector in SSIS packages generated with Biml. If you're creating these packages by hand, the experience is seamless. With Biml, you can employ the **OracleSource** element, which behaves like the **OleDbSource** element, in that it handles a lot of the column specifications for you. 

However, if you're using Biml without the benefit of the **OracleSource** element (e.g. if you can't use Mist, or if you need to employ the **CustomComponent** syntax), then you might run into data type conversion issues, as you need to specify all source columns and their data types **explicitly**, and implicit conversions in SSIS source components generally result in errors.