
SAP S/4 HANA at a Glance for Chief Information Officers

SAP S/4HANA opens the possibility to transform, in a very positive way, your current approach to how you serve your company.

Be prepared to change your old title of Chief Information Officer to Chief Business Transformation Officer (CBTO). It is not you who changes, but rather the way you provide real value to your corporation: adopting the new technology trends, shifting functions that do not belong to your core business value out of your company, and concentrating your efforts on truly understanding the real business opportunities to create value through fast, reliable, and efficient business transformation.

What to consider in your initial planning to adopt, upgrade, or migrate your current SAP ECC systems to the new SAP Business Suite on HANA:

Installation – Commonly completed via an upgrade and database migration (technical).

Functional – Required even if the project is approached as a pure technical migration:

Activation of transactions available on HANA

Analysis of specific transactions that can be activated to work on HANA.

Analysis of ABAP programs that need to be enhanced and/or enabled to run efficiently on SAP HANA.

There are specific tools to analyze the programs that need to be changed, re-programmed, unit tested, etc.

Functional Transformation – For those customers adopting the new business processes that come with SAP ECC 6.0 EHP 7.

Those business processes require a full re-engineering analysis to transform:

Blueprint – Design – Realize – Implement

The business process transformation is heavily influenced by the possibility to re-create the user experience with SAP Fiori.

Analysis of current business process (re-engineering)

Analysis of Business Process Flows

Analysis and enablement of new processes that combine several transactions into a single one.

Design, truly from an art point of view, the new look and feel of the system (open development via SAP Fiori).

The major functional change so far is around SAP FI, where the use of two separate financial objects is no longer required:

CO documents and PA documents are now combined in a single document (the SAP Smart Finance concept).

The major benefit for you in adopting this new technology:

It is not in the speed. OLTP transactions are improved not by better raw performance but rather by better business processes based on smart software that takes advantage of SAP HANA as a platform.

There is significant improvement in OLAP processes, where SAP Business Suite on HANA can now truly elevate the user BI experience at all levels, from operational reporting to higher strategic business intelligence.

Integration + Integration + Integration – Finally, you will be able to reduce the diverse vendor footprint that you have been collecting over the past 10 or 15 years. Don't forget the rule of thumb: it will cost you 20% to 30% of your implementation and ongoing maintenance budget to maintain connectivity (for example, process integration software). At the same time, SAP HANA's new and flexible Smart Data Access offers new possibilities to store big data using tools like Hadoop, federating data from several other systems and allowing for better and safer business-to-business interaction.

Finally, there is the fact that SAP will no longer support SAP ECC on traditional relational database management systems by 2020.

Business Intelligence Crowd-Sourcing Agile Methodology for SAP HANA

Traditional business intelligence systems are implemented in public and private companies around the world following a traditional waterfall methodology.

This methodology starts with a phase to initiate the project, continues with a vision and strategy definition phase, and then designs and builds the required application.

The most recent variant of this methodology is the agile methodology for business intelligence, which opens the possibility to develop applications based more on real end-user needs discovered in sprints, with the design and build phases repeating according to those needs.

End users and organizations complain about this process because it is long, does not fully match the information needs of business decision makers, and always relies on the interpretation of functional business analysts who translate the requirements back to the technical teams. It is similar to driving your car by telling someone else what to do and where to go.

Rapidly changing business environments force business decision makers to demand a faster, more flexible, and adaptive business intelligence methodology that gives design and build power to the real business users without relying on third parties to understand and design the required solutions.

Advances in database technologies, the faster integration of in-memory platforms, and flexible integration with data presentation layer platforms help bring the end user closer to a "self-service" business intelligence reporting approach. The self-service approach will potentially help end users and key decision makers explore data more freely and without dependencies on third parties.

The issue is still not solved.

The implementation of a new BI-Agile-Crowd-Sourcing methodology is the solution to this fundamental issue. It combines the technology and flexibility to design and build solutions directly on productive data, replicated in real time into development environments, with the ability to give all end users access to design and build, even at the proof-of-concept level. This brings the total freedom and required knowledge to end users at all levels of the organization to create the required solutions on a very fast and open platform. The designs can even be tested against real situations, and once the end-user community in charge of the development approves the proof of concept, the technical areas of the information technology teams can refine it and deliver the real productive solution. The transition from a proof of concept to a productive application will be very short, and the end user will have a clear vision of the requirements and a final solution that is ready to be built.

New business requirements for complex applications will not need to wait for the approval of the IT areas; the end users in need will launch their own proof-of-concept initiatives and simply wait for their turn to be built.

End-user designs will result from a very cooperative, open, and well-tested process, created by all those who have a real need. This open, collaborative, and self-driven process avoids the painful, slow, and dependent process of creating results through someone else who designs based on the input of a few. By Guillermo B. Vazquez

SAP HANA Analysis of Flight 214 Asiana Airlines

Background

After the terrible accident of Asiana Airlines Flight 214 this past Saturday, July 6, 2013, I became very curious to understand, from a management point of view, the possible causes of this incident.

As a frequent traveler myself, and knowing that many of my colleagues are also frequent travelers, I thought it would be a good idea to use the power of SAP HANA to try to identify possible causes of this aerial accident.

To make this a useful exercise, we will define some characteristics of who will conduct the analysis and what data will be relevant to finding possible causes of the accident.

Taking into consideration that this will be a managerial analysis, we will assume that management are not aviation experts and that they will simply collect data to ask educated questions of the real experts in their fields, a common management scenario in real life.

On the other hand, we will assume that management does not have access to the data that is under investigation by the federal authorities, so management must work around this data constraint.

Metadata definitions

The ideal raw data scenario would be to obtain data describing the behavior of the airplane minute by minute from takeoff to landing, and also to track similar flights from the same airline, the same airplane model (Boeing 777), and the same origin and destination.

Having those data sources, and analyzing that amount of data, gives management a better view than the data collected by the federal investigation bureau, because management can compare many airplanes that flew under similar circumstances, find patterns, and identify possible explanations.

The suggested metadata is the following:

Flight Date    ##/##/####

Time    ## : ##

Altitude in Feet : ##,###.##

Speed in MPH :  ###.##

Speed in KTS : ###.##

Range : ####,##   (feet per minute of descent or ascent)

Longitude of the Airplane Location  : ##.####

Latitude of the Airplane Location :  ##.####

Course : ##°

Direction : Text

Frequency of Data Collection

Data should be collected every minute from takeoff until the aircraft finally lands.

* The estimated number of records per flight is 900, each record with 11 columns, for a total of 9,900 attributes per flight.

* The total number of flights to analyze is 20, collecting a total of roughly 200,000 attributes.

The number of combinations needed to obtain the following results will be around 2,000,000.
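As a quick sanity check on these volume estimates, here is a minimal Python sketch that simply reproduces the arithmetic from the assumptions above (no new data is introduced):

```python
# Back-of-the-envelope check of the data volume estimates above.
records_per_flight = 900     # one record per minute, takeoff to landing
columns_per_record = 11      # metadata fields per record, as stated above
flights_to_analyze = 20

attributes_per_flight = records_per_flight * columns_per_record   # 9,900
total_attributes = attributes_per_flight * flights_to_analyze     # ~200,000

print(attributes_per_flight, total_attributes)
```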

Management should be able to analyze and compare the information of each flight by:

Location  (latitude – Longitude)

Flight – Date

Time

Altitude

Speed Miles

Speed Knots

Range

This makes it possible to compare the performance of the unsuccessful flight with other successful flights (a minimal sketch of such a comparison follows below).
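As an illustration of that comparison, here is a minimal Python/pandas sketch. The file name, column names, and the 500-foot final-approach filter are assumptions for illustration only and are not part of the original data set.

```python
import pandas as pd

# Hypothetical per-minute flight records; column names mirror the metadata
# defined above (the file name and exact schema are assumptions).
df = pd.read_csv("flight_214_minute_records.csv", parse_dates=["flight_date"])

# Focus on the final approach: records below 500 feet (illustrative threshold).
approach = df[df["altitude_ft"] < 500]

accident_date = pd.Timestamp("2013-07-06")
accident = approach[approach["flight_date"] == accident_date]
successful = approach[approach["flight_date"] != accident_date]

# Compare average approach speed (MPH) and descent/ascent rate (feet/minute).
summary = pd.DataFrame({
    "accident_flight": accident[["speed_mph", "range_fpm"]].mean(),
    "successful_flights": successful[["speed_mph", "range_fpm"]].mean(),
})
print(summary)
```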

Flight 214 Data

Data Sourcing

Flight Raw Data.

Data was collected from a public tracking site:

http://flightaware.com/live/flight/AAR214

The website was able to track all Flight 214 flights from June 25 to July 7, 2013.

The data available for the analysis totaled 525 flights: 350 flights in 2012 and 175 flights in 2013.

Each flight analyzed had the same characteristics as Flight 214:

Asiana Airlines

Boeing 777

Origin: Seoul, Korea

Destination: San Francisco, California, USA

Total of 472,500 records

Total of 5 million attributes

The raw data was loaded into  SAP HANA.

With this approach, management has a good base to formulate a strong analysis without depending on external sources.

Analysis

  1. On Friday, July 5, 2013, a day before the accident, Flight 214 (a Boeing 777) experienced an unusual landing approach at 151 MPH, less than the recommended speed. The average rate of descent across all Boeing 777 airplanes analyzed was -660 feet per minute.
  2. On Saturday, July 6, the accident airplane approached at 154 miles per hour at 400 feet, below the recommended speed and with a descent rate of -990 feet per minute, an indication that the airplane started its descent maneuvers earlier than all the other Flight 214 flights analyzed.

Flight 214 Data II

3. At 14:28 Pacific Time, at 300 feet, the accident airplane of Flight 214 presented another unusual pattern compared with the other successful Flight 214 flights analyzed.

Its speed of 142 miles per hour was 15% below all successful flights analyzed, with a rate of descent of -840 feet per minute, 40% above the average of similar flights with speeds of less than 150 miles per hour.

Flight 214 Data III

  4. At 14:28 Pacific Time, at 200 feet, the accident Flight 214 had a very low speed of 98 MPH, significantly below all other Flight 214 flights analyzed, and its rate had turned to ascending at 120 feet per minute, or 2 feet per second. This is an indication that the plane's engines were fully operational, ruling out the possibility of a fuel line icing event such as the British Airways Boeing 777 (registration G-YMMM) hard landing at London Heathrow on January 17, 2008.

There is also an indication that the plane was forced into a last-minute ascending maneuver, bringing the airplane's nose 15 degrees up.

Flight 214 Data IV

Conclusions and Recommendations

  • SAP HANA supported running analyses of data recorded from hundreds of similar Asiana Airlines Boeing 777 flights from Seoul to San Francisco, like the accident Flight 214, processing millions of combinations in a very short period of time.
  • The SAP Lumira presentation layer is intuitive and makes it easy to manipulate and explore data; it is oriented to senior management, with no training required.
  • Management recommendations: conduct an exhaustive investigation to rule out the 12.5% possibility of an external agent or external event that triggered a possible malfunction of the autopilot functions.
  • Open an investigation of the Flight 214 of July 5, 2013, which landed in San Francisco a day before the Flight 214 accident of July 6, 2013; that flight also presented a severely low average speed during its final landing approach maneuvers.

 

Analysis of Asiana Flight 214 (July 6, 2013) using an SAP HANA system (3)

SAP HANA Live – A real use case to Understand

HANA Live – Analytic Foundation

For those clients that run their businesses on SAP Business Suite systems, also run non-SAP transactional and analytical systems, and at the same time would like an alternative to, or an addition to, BW modeling and current ABAP reporting, the HANA Analytic Foundation represents a very good option.

The HANA Analytics Foundation (HAF) provides very extensive reporting capabilities across all Business Suite applications through pre-built SAP data models.

HAF provides hundreds of HANA data models based on the standard SAP business scenarios on which your systems run.

These data models are the foundation for larger, scalable HANA data models and applications that can be expanded with structured data from your SAP or non-SAP systems, and also with unstructured data from non-SAP systems.

Your starting point for building your analytical foundation is far from scratch, as it was in HANA's beginnings.

Your data warehouse in HANA can be completed in less time, thanks to the time and effort saved by using complex, robust, pre-built, and certified SAP HANA data models.

Data sourced into SAP HANA from SAP Business Suite systems such as SAP ECC, CRM, SCM, and GTS uses the SAP Landscape Transformation (SLT) system, while data from non-SAP systems can be sourced into HANA using SAP Data Services.

Figure (1) below represents a complex architecture with a single SAP Business Suite system (SAP ECC) and many other non-SAP systems bringing content into SAP HANA.

The virtual data models are used as the foundation for more complex, ad hoc HANA data models. Please note that SAP HAF only supports pre-built calculation views.

Figure (1)


Please note that SAP HAF provides virtual data models not only for SAP ECC but also for SAP CRM, SCM, and GTS, and SAP is expanding these capabilities.

SAP HAF consists of the following elements: virtual data models and, on top of them, analytical applications with rich clients and user interfaces.

As a customer, you can build new query view layers on top of the pre-built virtual data models, expanding the basic models provided by SAP.

This extensible model allows you and your company to easily expand your analytical capabilities, extracting and using views that are based on standard SAP business processes.

You will not need to rethink how to build these views, which most likely will be standard across industries and sectors, freeing your organization's talent to focus on actual business analytics needs and expanding operational possibilities in the design of mobile applications.

The most important features of SAP HANA can be exploited using SAP HAF: uniformity, speed, and scalability.

If you are familiar with SAP business scenarios, processes, steps, and transactions, you know the configuration complexities and dependencies in ECC, CRM, and the other SAP suite systems. SAP HAF makes this content available, and your teams will not need a deep understanding of the underlying SAP data models.

Let's follow an implementation example so you can understand the benefits and use of SAP HAF.

HAF Use Case: SAP Sales and Distribution

For this example, let's define Corporation XYZ, a very successful candy producer that runs its SAP ECC systems using the Consumer Products industry solution.

Their main distribution channel is primarily composed of young entrepreneurs who run all their operations using complex e-commerce applications. After meeting with the Sales Vice President of Corporation XYZ, they are demanding a tool that can help them predict what their main distributors will order and, most importantly, what the main order composition is. On top of that, they expressed serious concerns regarding competitors penetrating their markets, and they would like to detect in advance which of their clients are not reordering at normal rates.

The Sales Vice President, in a normal display of sympathy, offers them a solution to be implemented within 45 days, before the peak season of national candy sales, and, in order to keep them happy, he also offers to make this reporting available on mobile devices.

The Sales Vice President finishes his meeting and calls his CIO with the request.

In this new business environment, such a request should not be a surprise for efficient IT areas, and implementation cycles should be kept to weeks to keep the business agile and efficient.

The implementation team runs its operations on an architecture similar to the one described in Figure 1.

SAP HAF provides the implementation team with pre-built SAP SD-based HANA data models; the team's consultants can search the http://help.sap.com site and find that the following HANA data models are available. Please note that the team already has the SAP HAF virtual models installed, and their main analysis will be in finding out how to produce additional data models to predict orders, quantities, and current clients' purchasing trends.

Please review the content of the SAP ECC Sales and Distribution Virtual models available.

1.2.1 Sales **

The following reuse views are included in this virtual data model:

  • Reuse view for sales document headers (SalesDocumentHeader)
  • Reuse view for sales quotation headers (SalesQuotationHeader)
  • Reuse view for sales order headers (SalesOrderHeader)
  • Reuse view for sales contract headers (SalesContractHeader)
  • Reuse view for headers of credit memo requests (CreditMemoRequestHeader)
  • Reuse view for headers of debit memo requests (DebitMemoRequestHeader)
  • Reuse view for customer return headers (CustomerReturnHeader)
  • Reuse view for business partners for SD document headers (SDDocHeaderBusinessPartner)
  • Reuse view for sales document items (SalesDocumentItem)
  • Reuse view for sales quotation items (SalesQuotationItem)
  • Reuse view for sales order items (SalesOrderItem)
  • Reuse view for sales contract items (SalesContractItem)
  • Reuse view for items of credit memo requests (CreditMemoRequestItem)
  • Reuse view for items of debit memo requests (DebitMemoRequestItem)
  • Reuse view for customer return items (CustomerReturnItem)
  • Reuse view for business partners for SD document items (SDDocItemBusinessPartner)
  • Reuse view for schedule lines of sales documents (SalesDocumentScheduleLine)
  • Reuse view for schedule lines of sales orders (SalesOrderScheduleLine)
  • Reuse view for schedule lines of customer returns (CustomerReturnScheduleLine)
  • Reuse view for sales document conditions (SalesDocumentCondition)
  • Reuse view for sales quotation conditions (SalesQuotationCondition)
  • Reuse view for sales order conditions (SalesOrderCondition)
  • Reuse view for sales contract conditions (SalesContractCondition)
Measures and Attributes

Some important measures and attributes are the following:

  • Reuse view for sales document headers (SalesDocumentHeader)
    • The view provides general data such as SD document category, sales organization, distribution channel, organization division, sold-to party, purchase order by customer and overall SD process status.
    • The view provides information about values such as total net amount and header counter.
  • Reuse view for sales quotation headers (SalesQuotationHeader)
    • The view provides general data such as sales organization, distribution channel, organization division, sold-to party, purchase order by customer, sales quotation validity start date and sales quotation validity end date.
    • The view provides information about values such as total net amount.
  • Reuse view for sales order headers (SalesOrderHeader)
    • The view provides general data such as sales organization, distribution channel, organization division, sold-to party, purchase order by customer, requested delivery date, overall SD process status and user ID of creator.
    • The view provides information about values such as total net amount.
  • Reuse view for sales contract headers (SalesContractHeader)
    • The view provides general data such as sales organization, distribution channel, organization division, sold-to party, sales contract validity start date, sales contract validity end date, sales contract signed date and sales contract cancellation reason.
    • The view provides information about values such as total net amount.
  • Reuse view for headers of credit memo requests (CreditMemoRequestHeader)
    • The view provides general data such as sales organization, distribution channel, organization division, sold-to party, purchase order by customer, overall SD process status, user ID of creator and creation time.
    • The view provides information about values such as total net amount.
  • Reuse view for headers of debit memo requests (DebitMemoRequestHeader)
    • The view provides general data such as sales organization, distribution channel, organization division, sold-to party, purchase order by customer, overall SD process status, user ID of creator and creation time.
    • The view provides information about values such as total net amount.
  • Reuse view for customer return headers (CustomerReturnHeader)
    • The view provides general data such as sales organization, distribution channel, organization division, sold-to party, purchase order by customer, overall SD process status, user ID of creator and creation time.
    • The view provides information about values such as total net amount.
  • Reuse view for business partners for SD document headers (SDDocHeaderBusinessPartner)
    • The view provides general data such as sold-to party, ship-to party, bill-to party, payer party, sales employee and responsible employee.
 Used Tables and Views

The views mainly select data from the following tables, for example:

  • VBAK (sales document header)
  • VBAP (sales document item)
  • KONV (conditions)
  • LIKP (delivery document header)
  • VBUK (delivery document header status)
  • LIPS (delivery document item)
  • VBUP (delivery document item status)
  • VBUV (incompletion log)
  • VBRK (billing document header)
  • VBRP (billing document item)
  • KONV (conditions)

** source from www.help.sap.com
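To give a concrete feel for how such a reuse view is consumed, here is a minimal Python sketch using SAP's hdbcli client. The host, port, credentials, package path, and column names are placeholders based on the attribute descriptions above; verify the actual view path and field names in your own HANA Live installation.

```python
from hdbcli import dbapi  # SAP's Python driver for HANA

# Connection parameters are placeholders; replace with your landscape's values.
conn = dbapi.connect(address="hana-host", port=30015,
                     user="REPORTING_USER", password="********")
cursor = conn.cursor()

# The package path and column names below are assumptions for illustration;
# HANA Live reuse views are typically exposed as calculation views in _SYS_BIC.
sql = """
SELECT "SalesOrganization", "DistributionChannel",
       SUM("TotalNetAmount") AS "NetAmount"
FROM "_SYS_BIC"."sap.hba.ecc/SalesOrderHeader"
GROUP BY "SalesOrganization", "DistributionChannel"
"""
cursor.execute(sql)
for row in cursor.fetchall():
    print(row)

cursor.close()
conn.close()
```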

SAP HAF provides equivalent virtual views for the SAP SD delivery and billing processes; as you can see, the IT group of Corporation XYZ has available all the key components of an end-to-end sales process.

Across the sales order, delivery, and billing processes, the available SAP HAF virtual models cover the end-to-end requirements, so after implementation the IT group of Corporation XYZ will have available:

SAP SLT data schemas built for the SAP ECC tables mentioned above.

SAP HANA data models with the required calculation views, attribute views, and analytic views already operational.

Basic content available through HTML5 presentation views.

So an implementation that by itself could take several weeks to bring standard SAP ECC content into HANA is completed, tested, and operational in days.

The Corporation XYZ IT group can therefore focus only on the analysis and build of XYZ-specific virtual views on top of the existing SAP HAF views.

Corporation XYZ hires its favorite quant from a prestigious university in Pennsylvania and discusses the urgent requirement.

The statistician concludes the following based on the business requirements and the data available in the SAP HAF Sales and Distribution models. He took only five business days to complete the research, thanks to the efficient documentation available for SAP HAF.

The Statistical Model for Corporation XYZ.

A basic logistic regression model that predicts the probability that a customer will place a sales order, and a linear regression model to be used in forecasting the number of items per product, will constitute the foundation of the required data model to be built in SAP HANA.

The functional teams at Corporation XYZ analyzed the statistical model and concluded that this new data model is of medium complexity and will take three weeks to fully implement, because no new data will need to be replicated through SLT and because the required algorithms can be created using the statistical language "R" available in SAP HANA.
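The article points to the R language available in SAP HANA for these algorithms; purely to illustrate the shape of the two models, here is a hedged Python sketch using scikit-learn. The file name, feature names, and labels are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression

# Hypothetical training data extracted from the HAF sales views.
orders = pd.read_csv("customer_order_history.csv")

# Reorder model: probability that a customer places a new order in the next
# period, based on simple behavioral features (names are assumptions).
features = orders[["avg_days_between_orders", "days_since_last_order",
                   "log_order_count"]]
placed_next_period = orders["placed_order_next_period"]   # 0/1 label

reorder_model = LogisticRegression().fit(features, placed_next_period)

# Quantity forecast: linear regression on the same features to estimate the
# number of items expected in the next order.
next_order_quantity = orders["next_order_quantity"]
quantity_model = LinearRegression().fit(features, next_order_quantity)

print(reorder_model.predict_proba(features[:5])[:, 1])  # reorder probabilities
print(quantity_model.predict(features[:5]))             # forecast quantities
```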

The model to predict the probability that a sales order will be placed, the Reorder Model, will require the following data elements; in more formal terms, the Reorder Model metadata requirements are:

1) Forecasted Customer New Order Date

a) Calculate the average number of days between past orders.

i. An algorithm calculates the difference between the sales order date of sales order 1 and sales order 2, and so on up to sales order n, for orders placed in the last 180 days.

2) Forecasted Customer Reorder Quantity

a) Calculate the average consumption rate between past orders.

i. An algorithm calculates the difference between the sales order quantities of Product A in sales order 1 and sales order 2, and so on up to sales order n, for orders placed in the last 180 days.

** Please note that HANA will handle the millions of records that Company XYZ will need to process to estimate the Forecasted Customer Reorder Quantity.

3) Estimated Product Mode

a) Calculate how many times Product A, Product B, ..., Product n has been ordered.
b) Take the natural logarithm of the estimated product mode for each product.
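A minimal Python/pandas sketch of how these three metadata elements could be derived from raw sales order items is shown below. The file, table, and column names are assumptions chosen to mirror the descriptions above (customer, order date, product, quantity), restricted to the last 180 days.

```python
import numpy as np
import pandas as pd

# Hypothetical sales order item extract (column names are assumptions).
items = pd.read_csv("sales_order_items.csv", parse_dates=["order_date"])
cutoff = items["order_date"].max() - pd.Timedelta(days=180)
recent = items[items["order_date"] >= cutoff]

# 1) Average number of days between past orders, per customer.
order_dates = (recent.drop_duplicates(["customer", "order_date"])
                     .sort_values("order_date"))
avg_days_between = (order_dates.groupby("customer")["order_date"]
                               .apply(lambda d: d.diff().dt.days.mean()))

# 2) Average ordered quantity per customer and product (consumption rate).
avg_quantity = recent.groupby(["customer", "product"])["quantity"].mean()

# 3) Estimated product mode: order count per product, then its natural log.
order_counts = recent.groupby("product")["order_date"].count()
log_product_mode = np.log(order_counts)

print(avg_days_between.head())
print(avg_quantity.head())
print(log_product_mode.head())
```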

With these variables calculated, Company XYZ builds the required predictive algorithms and order-volume estimation algorithms using the advantages of the statistical language "R".

Once the models are operating, Company XYZ defines and implements the required BOBJ 4.0 Explorer views, which can be displayed on mobile devices. The level of effort to complete this implementation is low, as the teams are using standard BOBJ functionality.

Company XYZ was able to implement this complex statistical model in the time frame defined by its business requirements because:

  • SAP HAF virtual models provide all the content required to map an end-to-end sales process, from sales order to billing, at the header and item levels.
  • Teams were able to spend more time creating their own virtual models to solve a complex business problem.
  • Teams also used the available BOBJ 4.0 reporting capabilities to fulfill the business requirements.

In-Memory Computing with SAP HANA on IBM eX5 Systems – Free Book

HANA Book

Together with three colleagues from IBM, I contributed to an IBM Redbook on SAP HANA in-memory technology; my name is mentioned on page xi. It is an excellent read to understand the new technology trend, and it is free!

Published by IBM, top class, and the best part is that it is totally free for you!

Please download it from: http://www.redbooks.ibm.com/redbooks/pdfs/sg248086.pdf

SAP HANA: A Tough Decision to Select HANA Enterprise Edition vs. BW on HANA (1 of 2)

Advising many of my clients, I have had the privilege of working with senior information technology executives through the difficult process of laying out their SAP HANA road maps. In the majority of cases, we struggle with the difficult decision of which is the better tool: SAP HANA Enterprise or BW on HANA.
Confusion in the market is triggered by internal pressure from their own business organizations, which fairly demand more robust business intelligence systems that can react and adapt faster and smarter to dynamic markets and competition.
An important external factor that contributes to this confusion is that IT officers in the industry are the target of a ruthless software market war, where the dispute among different vendors to position their technologies as the dominant force in the market puts businesses in the middle of the fight, making the decision process difficult.
There are only two industries that refer to their "customers" as "users": one is the software industry, and the other, well, I will let you think about that one, because it is not considered a legal industry by any means.
In this article, we will concentrate on the selection between SAP HANA Enterprise and BW on HANA.
I will not discuss other technologies offered by other software providers.
The selection between SAP HANA Enterprise Edition and BW on HANA needs to follow a very serious and systematic analysis process. The process has seven (7) steps; in this blog, we will discuss three (3) of the seven.
STEP 1 – Discover the different SAP HANA technology options that can be adopted into your current functional and technical landscape.
List all your possible implementation options by:
1) Option Name
2) Option Description
The most common options available for the majority of clients are:
a) Migration of BW 7.0 to BW 7.3 on HANA
b) Implementation of HANA Enterprise Edition
c) Implementation of a HANA hybrid option: HANA EE + BW on HANA
STEP 2 – Define Key Business Value and Key Implementation Value measures for each HANA Option.

Key Business Value: You can define these key value indicators by interviewing key business users; the size of your company, the scope of your project, and the time available define the extent of the analysis.
In practice, you will define who gets the attention in order to discover the pain points in your business and then find the value of the implementation.
As an example, you may want to select the following key business users from the financial area:
Financial area: Chief Financial Officer, Corporate Controllers.
Example of Key business values are:
Foundational : Information to Manage the Business
Competitive : Information as a Strategic Asset
Differentiating : Information to enable Innovation
Breakaway : Information as a Strategic differentiator
Key Implementation Values:
Implementation values are defined by the information technology areas, from the functional and technical perspectives. You want to define how you will measure a successful implementation from the technical point of view.

Example:
Accelerated Development Cycles
Flexibility to model and flexibility to scale the system architecture
Complexities of Data Integration
Risk to Implement
Direction of the Install Base
Time lines, costs to implement

STEP 3 – Assign values to each key performance indicator and weight each of your implementation options.
Move away from the emotions and wonders of the technology; it is time for a quantitative assessment of your future endeavor.
Those numbers will give you a very clear picture of where you and your company are heading with the investment and will help you take a more professional approach.

Produce a Pain / Gain Chart.

Example of a Pain-Gain analysis

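A minimal Python sketch of such a weighted scoring exercise is shown below. The criteria, weights, and scores are purely illustrative examples, not recommendations.

```python
# Illustrative weighted scoring of the three implementation options.
# Criteria weights and 1-5 scores are made-up examples for the exercise only.
criteria_weights = {
    "business_value": 0.35,
    "development_cycle": 0.15,
    "flexibility_to_scale": 0.15,
    "data_integration": 0.15,     # higher score = simpler integration
    "implementation_risk": 0.10,  # higher score = lower risk
    "time_and_cost": 0.10,
}

option_scores = {
    "BW 7.3 on HANA migration": {
        "business_value": 3, "development_cycle": 3, "flexibility_to_scale": 3,
        "data_integration": 4, "implementation_risk": 4, "time_and_cost": 4,
    },
    "HANA Enterprise Edition": {
        "business_value": 4, "development_cycle": 4, "flexibility_to_scale": 4,
        "data_integration": 2, "implementation_risk": 3, "time_and_cost": 3,
    },
    "Hybrid (HANA EE + BW on HANA)": {
        "business_value": 5, "development_cycle": 3, "flexibility_to_scale": 5,
        "data_integration": 2, "implementation_risk": 2, "time_and_cost": 2,
    },
}

for option, scores in option_scores.items():
    total = sum(criteria_weights[c] * s for c, s in scores.items())
    print(f"{option}: weighted score {total:.2f}")
```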

At this point, you can provide a clear picture of what the HANA road map will potentially look like.
It is recommended to hold a review meeting with key business stakeholders and key IT sponsors to validate the findings and get a group perspective on them.

In the next blog, I will guide you through the following four (4) steps, describing how to design a high-level architecture for the proposed technology solution and how to define the goals and objectives of the SAP HANA proof of concept.

Stay tuned.

Big Data and HANA, but First Let's Define What Big Data Is!

Big Data and HANA, but first let me clarify what Big Data is!

In recent years, the term Big Data has been frequently used in all sorts of discussions, mostly by technical, functional, and business teams. The term is commonly used in discussions across conference rooms, freely mentioned in advertising, and used in internal and external communications.

New technologies are presented as potential saviors from this Big Data phenomenon and the other issues that almost all serious global companies have to face.

The majority of executive management, business managers, senior information technology executives, and consultants are dealing with this commonly used term "Big Data" without really knowing what it actually means.

Definition of Big Data

The first smart thing to do is to clarify what "Big Data" is, and then discuss how we can solve big data related issues by using cool new technologies like SAP HANA.

Several authors recognize that the three main characteristics of Big Data are:

1) Volume – Data sets with volumes of data that reach terabytes, petabytes, exabytes, or even zettabytes. **

2) Variety – Unstructured and/or structured data

3) Velocity – Batch, real time, or real-real time.

    ** For those who like details:

         1 Terabyte = 1,024 Gigabytes

         1 Petabyte = 1,024 Terabytes

         1 Exabyte = 1,024 Petabytes

         1 Zettabyte = 1,024 Exabytes

The key to resolving the issue of Big Data is knowing how to enable effective decision making and process automation in a cost-effective way.

Honoring my Wharton Business School indoctrinated way of thinking, I will emphasize the two most important factors that, in my opinion, an effective Big Data strategy needs to address:

        EFFECTIVE DECISION MAKING and COST*-EFFECTIVE DECISION MAKING

*Cost = [(Business Value) - (Cost to implement + Business time and suffering costs to implement)]
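As a tiny worked example of this formula (all figures below are hypothetical and chosen only to show how the terms combine):

```python
# Hypothetical figures, purely to illustrate the formula above.
business_value = 5_000_000        # expected business value of the initiative
cost_to_implement = 2_000_000     # implementation cost
time_and_suffering = 1_000_000    # business time & suffering cost to implement

net_cost_effect = business_value - (cost_to_implement + time_and_suffering)
print(net_cost_effect)            # 2,000,000: positive means the effort pays off
```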

The majority of businesses and corporations around the world are creating hundreds of thousands of gigabytes of transactional data, which has turned into specific analytics on structured data.
Even though transactional volumes have been increasing, and will continue to do so, this alone is not considered a Big Data issue.
The main reason is that there is no variety of data; there may be a variety of data sources and systems that process the transactional data generated, but in the end there is no variety of data.
For analytical needs, software companies across the world have invested billions of dollars to solve this problem, and companies across the world have done a very good job of staying on top of it.
We can use products like IBM Cognos and SAP BusinessObjects, just to mention some of those available in the market today, bringing the right presentation layers according to the business needs.
The issue of Big Data appears when business corporations need to start including their clients', vendors', and competitors' thoughts, reactions, and motives in their business decision models.
Today these clients increasingly reflect their motivations, likes, support, interests, and so on, informally and formally, in private or public social media tools that most likely combine unstructured and structured data, and that variation in the traditional equation is what makes the Big Data concept relevant in today's business environment.
Other corporations, thanks to new technologies, are able to measure and track business tools and items so closely that millions of units can be tracked on a daily, hourly, and even minute-by-minute basis, providing relevant information to business models that can understand the behaviors of millions of users in real time.
The closer you are to the data generated, the closer you are to being in the lead, and the closer you are to succeeding.
In today's business environment, that is the difference between a business success story and a boring, run-of-the-mill failure story.
The BIG DATA and SAP HANA story, a marriage made in Heaven.
SAP HANA has the unique ability to combine structured and unstructured data processing and transformation in the same place.
On top of that, HANA presents a very strong proposition: real-real time replication of source data, which reduces the lead time to identify and complete data loads from the data sources.
The integration of SAP and non-SAP systems with social media and other data sources positions this tool very strongly as an effective tool to help resolve the Big Data issues that your corporation or clients are facing.
In my next blog, I will explain some methods for approaching the analysis of unstructured data and how the combination of different technologies like Hadoop and SAP HANA can cost-effectively make your life easier without breaking the bank.
Yours
Guillermo

** Legal disclaimer: The opinions reflected in this article are entirely my own, independent of and unrelated to my current employer's opinions.