
2017-01-25

The data guy learns German

Many years ago I worked for a company headquartered in Germany.

I was able to go to Darmstadt for a short visit. I did learn a few German phrases, but I did not get around to learning much of the language.

I have always wanted to get around to learning how to read and write German, but I kept putting it off.

This year I have decided to focus my personal journey on linguistics and language processing. Learning another language may or may not help me directly as I get back into the text processing space, but understanding more about the process we as humans go through in order to understand other languages will hopefully give me more insight into providing value in text analysis. Not to mention, most of the linguists I have known speak more than one language.

I think focusing on learning another language for human conversation may help me as I focus more on language processing.

So I began my journey earlier this year.

The following are the resources that I have been using.

  • https://www.youtube.com/user/DeutschFuerEuch
  • https://www.youtube.com/channel/UCTobWZV_HWGSoaRrhHyrJ-A
  • A few other YouTube channels
  • memrise.com
  • The concepts taught in the book Fluent Forever

I encourage anyone interested in language learning to read this book. The main concepts in his technique are the following:
  • Learn pronunciation using the International Phonetic Alphabet (IPA).
  • Use flashcards that you make yourself.
  • Focus on word-frequency lists when learning vocabulary. The word Ich is more common than the word Pilz, and you can work out more of what you read and hear when you know and understand the words that appear most frequently.
  • Do not translate words from German to English. Instead, draw pictures of the items, artifacts, or concepts. For example, on a flashcard for the word Sonne, draw a picture of the sun.






I do want to focus on this last point for a moment. I have not yet spent as much time as I would like putting together flashcards the way Gabriel Wyner suggests in the book above. What I do instead is, when I take note of a new word I am learning, write only the word in German, followed by a picture that means something to me and represents the item, artifact, or concept. I do not know if this method has the same effect as what he suggests, but it is what I am attempting. Ultimately I will probably create the full flashcards as he suggests, with the word in Deutsch, the IPA pronunciation, and an image that represents the word.



I added one other thing to the list: "read German books." For any L1 language, we naturally increase our vocabulary as we read, and the same should hold for a language we are learning. I have started with some young adult readers and will work my way through more of them. I am already at the point where I can mostly understand the German text while looking up a few words. This ability will continue to grow this year.

 I also intend to watch some television shows dubbed into German. Wikipedia also has some really good technical material that can be reviewed in both English and German. I plan to use that resource as much as possible to clarify the technical terms I work with daily.

Google Translate is also very helpful: I may think I have a translation worked out, but Google confirms the parts I had correct and shows me where I went wrong.

So here goes my first bilingual post.

Mein Verständnis von Deutsch steht erst am Anfang. Ich hoffe, bald mit mehr Menschen auf Deutsch sprechen zu können. Wenn Sie Deutsch sprechen, schreiben Sie mir bitte eine Nachricht.

(My understanding of German is only at the beginning. I hope to speak German with more people soon. If you speak German, please write me a message.)

Prost!

Doug
 






2017-01-05

What is the performance relationship between a Database and a Business Intelligence Server



This article will be a bit long; it covers a complicated topic that I have been studying for quite some time. 

I have run across the need to explain this topic on a number of occasions, and over time my explanations have hopefully become clearer and more succinct.

The concept that will be discussed here is the performance relationship between a database server and a business intelligence server in a simple data mart deployment. 

Data mart deployments are rarely simple, but the intention is for this to be a reference article for understanding the relationship between server needs and the performance footprint under some of the scenarios experienced during the lifecycle of a production deployment.

Here is a simple layout of an architecture for a data mart:

A very basic image: the D is the database server, the F is the front-end server, the U's are the users, and they are all connected via the network.


To be precise, this architecture represents a ROLAP (Relational Online Analytical Processing) implementation built on top of a dimensional model (star schema). The dimensional model is assumed to be populated and kept current by entirely separate ETL processes that are not represented in this diagram.
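To make the examples below concrete, here is a minimal sketch of the kind of star schema this assumes. The table names Fct_orders and Dim_Date match the queries later in the article; the individual columns are hypothetical stand-ins for whatever your model actually contains.

    -- Minimal star schema sketch: one fact table and one date dimension.
    -- Only date_key and total_sold are referenced later; the rest are illustrative.
    create table Dim_Date (
        date_key   integer primary key,  -- surrogate key for the calendar date
        full_date  date,
        year       integer,
        month      integer
    );

    create table Fct_orders (
        order_key  integer primary key,
        date_key   integer not null references Dim_Date (date_key),
        total_sold integer               -- units sold for this order line
    );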

The “D” represents any database server: Oracle, MySQL, SQL Server, DB2, whichever infrastructure you or your enterprise has chosen.

The “F” represents any front-end business intelligence server that is optimized for dimensional-model querying and is supported by a server instance: Business Objects, Cognos, Pentaho Business Analytics, Tableau Server. The desktop-specific BI solutions do not fit in this reference model, for reasons we shall see shortly.

In my early thoughts on the subject, I envisioned that the performance relationship in a properly built data mart would be something like this:







This is a good representation of what happens. 

On the left side of the chart we have the following scenario.

When responding to a user interaction, the front-end server sends a request back to the database for aggregated data, such as: "show me the number of units sold over the last few years."

One could imagine the query being something like: select year, sum(total_sold) from Fct_orders fo inner join Dim_Date dd on fo.date_key = dd.date_key group by year.
 
The dutiful database does an aggregation. Provided all of the statistics on the data are current, a short read takes place, and more CPU and memory but less disk I/O is used to do the calculation.
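A runnable sketch of that aggregated query against the hypothetical schema above, with the GROUP BY spelled out, might look like this:

    -- Aggregated request: one small row per year comes back,
    -- so the database spends CPU and memory while reading relatively little from disk.
    select dd.year,
           sum(fo.total_sold) as units_sold
    from Fct_orders fo
    inner join Dim_Date dd
            on fo.date_key = dd.date_key
    group by dd.year
    order by dd.year;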

In the graph this is represented by the high red line on the upper left.

The results returned to the front end are small: a single record per year of collected data.
The CPU and memory load on the front-end server is tiny, shown in green on the lower left.


On the right side of the chart we have the following scenario.

When responding to a user interaction, the front-end server sends a request back to the database for non-aggregated data, such as: "show me all of the individual transactions that took place over the last few years."

One could imagine the query in this case being something like: select fo.* plus the dimensional attributes from Fct_orders fo inner joined to all of the connected dimension tables.

In this case the database server has little option but to do a full table scan of the raw data and return it.
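A sketch of that detail query against the hypothetical schema above (a real query would join every connected dimension, not just the date dimension) might look like this:

    -- Detail request: every fact row plus dimensional attributes comes back,
    -- so the database does mostly disk I/O and a large result set crosses the network
    -- to the front-end server.
    select fo.*,
           dd.full_date,
           dd.year,
           dd.month
    from Fct_orders fo
    inner join Dim_Date dd
            on fo.date_key = dd.date_key;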

In the graph this is represented by the lower red line on the right (more disk I/O, less CPU and memory); the data is then returned to the business intelligence server.

Our front-end server will have to do some disk caching, as well as lots of processing (CPU and memory), to handle the load just handed to it, not to mention things like pagination and possibly holding record counters to keep track of which rows the user has or has not seen, among other things.

This graph seems to summarize the relationship between the two servers rather nicely. However, something is missing.

I had to dwell on this image for some time before I was able to think of a way to visualize the thing that is missing.

The network.

And even then there are at least two parts to the network.

The connection between the front-end server and the database, and the connection between the front-end server and all of the various users.

Each of these has a different performance footprint.

Representing the database performance, front-end performance, and network performance for both the user and system connections is something with which I continue to struggle.

Here is the image I have recently arrived at:






This chart needs a little context to understand the relationships between the four quadrants.

Quadrant I is the server network bandwidth. In a typical linear relationship, as the size of the data moving from the database to the front end increases, the server network bandwidth increases.

Quadrant II is the database performance relationship between CPU/memory and disk I/O for a varying query workload. For highly aggregated queries, CPU and memory usage increases, and the server network bandwidth is smaller because less data is being put on the wire. For less aggregated data and more full data transfers, disk I/O is higher, memory usage is lower, and back in Quadrant I the server network bandwidth is higher.

Quadrant III is the front-end server performance, comparing CPU/memory and disk I/O when dealing with a varying volume of data. As the data coming from the database increases, more resources and caching are needed on this server.

Quadrant IV is the user network bandwidth, the result of the front-end server responding to requests from the users. As the number of users increases, the volume of data increases and more load is put on the front-end server. Likewise, the bandwidth increases as more data is provided to the various users.

This image is an attempt to show the interactions between these four components.

The things that make this image possible are a well-designed dimensional model, a rich semantic layer with appropriate business definitions, and common queries that tend to be repeated.

This architecture can support exploratory analysis; however, the data to be explored must be defined and loaded up front. Exploratory analysis to determine which data points need to be included in the data mart should be done in a separate environment.

I created all three of these images in R, using igraph and ggplot2, with anecdotal data. The data shown in this chart is not sampled; it is meant as a representation of how these four systems interact. Having experience monitoring many platforms supporting this architecture, I know for a fact that no production system will actually show these rises and falls as neatly as this representative chart does.

However, understanding that at their core they should interact this way should give you a pointer to where a performance issue may be hiding in your architecture if you are experiencing problems. The other use case of this image is as an estimation tool when designing new solutions.

All that being said, much of this architecture may be called into question by new tools.

Some newer systems, such as Hadoop, Snowflake, and Redshift, actually change the performance dynamics of the database component.

The cloud concept has an impact on the system bandwidth component. If you have everything in the cloud, then in theory the bandwidth between the database server and the front-end server should be managed by your cloud provider. There may need to be VPC peering if you set the servers up in separate regions.

If these are being run within a self-managed data center, should the connection between the database server and the front-end server be on a separate VLAN or switch? Perhaps.
Does the front-end server use separate connections for the database querying interface and the user-facing interface? Should it?

Do you need more than one front-end server sitting behind a load balancer? How many users can one of your front-end servers support? What are the recommended limits from the vendor? Should data partitioning and dedicated servers per business unit be used to optimize performance for smaller data sets?

These are all the types of questions that arise when looking at the bigger picture, specifically when you are doing data systems design and architecture, which requires a slightly different touch than application systems design and architecture.

Thinking about how this diagram applies in your own enterprise will hopefully give you insight into your environment.

Can you think of a better way to diagram this relationship? Let me know.
The code and text are posted here.