When to pull the plug on legacy technology

Ok, so I realized my first goal mentioned in the last post was not particularly clear. Let’s iterate and improve: agile, sprints, and all that jazz. First adjustment: I will post at least one interesting piece every month of 2019. How about that?

Let’s go straight to business then: this month’s post is about knowing when to decommission old IT assets, in particular legacy technology. I subscribe to a few newsletters, and one that frequently catches my attention is the editorial from sqlservercentral.com. Below is a quote from today’s editorial, by Steve Jones.

When working on a large project, it’s sometimes hard to keep perspective on whether to keep going or stop and change directions. We often try to continue to improve and fix a project, even when it is not going well. (…) If I’ve spent $20 or $20mm on a project and I am evaluating whether to spend an equivalent amount moving forward, I can’t continue to worry about the money I’ve already spent.
That money is gone whether I stop now or continue on. What I ought to do is look forward and decide if future spending is worth the investment. Certainly my reputation, and often some pain for switching or decommissioning existing work is to be considered, but that’s part of the value and too often we become afraid of abandoning something we ought to get rid of for a newer, better something else.
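Steve’s rule is simple enough to write down. A minimal sketch in Python (hypothetical numbers, nothing from the editorial itself): the decision function accepts a sunk-cost argument only to prove it never uses it.

```python
def should_continue(future_cost, future_value, sunk_cost=0):
    """Forward-looking decision: sunk_cost is accepted but deliberately ignored."""
    return future_value > future_cost

# Whether we already burned $20 or $20mm, the call is the same:
assert should_continue(10, 15, sunk_cost=20) == should_continue(10, 15, sunk_cost=20_000_000)
```

Reputation and switching pain can be folded into `future_cost`, as the quote suggests, but money already spent never belongs there.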

This is an interesting point of view on how to handle your virtual legacy: you have to make the call at some point. He then links to another interesting article, by Leon Adato from Dataversity.net.

If you find yourself at a point where you’re justifying not changing a system because that system cost you so much in the first place, (…) consider the goals you had for the original implementation of that legacy technology. Ask yourself:
– Are the original goals for this technology still valid for the business?
– Is the current technology meeting those goals?
If the answer to either one of those questions is no, then it’s time to pull the plug. The moment you say it’s OK to live without something fundamental, like security patches, support, or the ability to upgrade, you’re failing.

Those are some really important considerations, and probably questions all of us in the IT industry should ask ourselves more often. Go ahead and take a look at both articles. The next post is due next month (which is, like, tomorrow!). Cya!

Posted in Cloud, Learning

Year review and Goal setting tips for 2019 (and any moment actually)

Today is the first business day of the year, and after a much-deserved break over the year-end holidays, I stumbled upon a video with great tips on how to set goals for yourself. As 2019 has just started, it is a great time to think about my own goals, and I wanted to share this example here so it can help you too.

This video is from a YouTuber/streamer who usually makes videos about video games, and although unexpected, it is actually a very good example of how you can set goals for anything in your life. Lowko also highlights the importance of having separate goals for each aspect of your life, knowing how to measure your progress, looking at and working on your goals every day, and keeping a positive attitude throughout the whole process.

I like working with short-term and long-term goals for both my personal and professional life, constantly reviewing and adjusting them as I progress. I do not usually keep close track of progress, but that is adjustable, and the positivity tip is a good incentive along the way, instead of only celebrating at the end (which can be very, very far away for some goals). Also, accountability is something very important to me.

With renewed inspiration, I would like to share my first goal of the year with you, dear reader. It is directly related to this blog: since its start, a considerable number of visitors came here to get help, but over the past few years that traffic has diminished. So the first short-term goal (2019) is to share more experiences and knowledge through more frequent posts. The mid-term goal (a 2-to-3-year frame) is to reverse the audience trend and grow page views again. The long-term goal remains to share knowledge and help people with the blog’s main themes (mostly databases and virtualization/cloud).

I have always had these ideas in my mind, but writing them down as goals is very helpful and gives some ground to check whether I’m on the right track. To help measure progress, I’m including a yearly statistics report here, which I will check early next year to see how things are going (what else did you expect a DBA to do? lol).

This very first post is part of these new goals, and I will close it as Lowko himself would: I want to thank you all for reading, have an amazing day, do not forget to smile, and I will see you in the next one! Cya!

Posted in Learning, Professional Goals, Training

2018 Training and Development goodies

Quick post on some Portuguese training resources this time:

Posted in Cloud, Training

Azure DW and cloud database alternatives

Quoting a great and concise article from SQLServerCentral: http://www.sqlservercentral.com/articles/Azure+SQL+Data+Warehouse+(ASDW)/172251

It is always good to know the options available when you are choosing the best cloud provider for your needs. Below is a summary of each option currently available.

Azure DWH part 28: The ASDW enemies

By Daniel Calbimonte, 2018/06/26


Superman has a Lex Luthor, the Ninja Turtles have the Shredder, the Smurfs have a Gargamel, Mozart had a Salieri, Edmond Dantès had a Fernand, and in our case, Azure SQL Data Warehouse has multiple enemies.

This time we will talk about the competitors. What do they offer? We will mention the following enemies:

  • Amazon Redshift
  • Alibaba Cloud Max Compute
  • Snowflake
  • Google Big Query
  • Teradata

Amazon Redshift

Amazon Redshift is the first enemy of ASDW: a petabyte-scale data warehouse service in the cloud. It is similar to ASDW and, as of now, it is the most popular cloud data warehouse service (Azure is the second one). Redshift started in 2012, is based on PostgreSQL, and is easy to use and scale.

If you are familiar with PostgreSQL and prefer it over SQL Server, it is a good choice. ASDW is similar to SQL Server and can be used with SQL Server Management Studio, so if you like SQL Server, you will prefer ASDW.

Amazon offers a data warehouse in the cloud that is easy to maintain at a low cost. The biggest advantage of the cloud is that you can scale easily; with an on-premises data warehouse, scaling means buying new hardware, migrating data, and suffering a lot. Redshift integrates easily with well-known BI tools like MicroStrategy, Jaspersoft, Pentaho, Tableau, Business Objects, Cognos, etc. It is also easy to create replicas of your data warehouse in different regions, and very easy to restore and encrypt your data.

I think it is the closest competitor because it offers a database platform with multiple services: not only a data warehouse in the cloud, but also several other services.


Regarding prices, currently there are 3 options:

  • On-demand pricing is pay-per-hour. The price depends on memory, storage, CPU, I/O, and region. For example, a dc2.large costs 0.25 USD per hour and a dc2.8xlarge costs 4.80 USD per hour.
  • Redshift Spectrum queries are charged at 5 USD per terabyte scanned.
  • Reserved instance pricing lets you save up to 75% off the on-demand price, but you must commit to the service for 1-3 years.

For more information about prices, refer to this link: Amazon Redshift Pricing
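To make those numbers concrete, here is a back-of-the-envelope sketch in Python comparing the quoted on-demand rate with a hypothetical reserved commitment at the full 75% discount (these are 2018 list prices; check the pricing page for current figures):

```python
HOURS_PER_MONTH = 730  # average hours in a month

def on_demand_monthly(usd_per_hour, nodes=1):
    # Pay-per-hour: price scales with node type and node count.
    return usd_per_hour * HOURS_PER_MONTH * nodes

def reserved_monthly(usd_per_hour, discount=0.75, nodes=1):
    # Reserved instances save up to 75% over on-demand for a 1-3 year term.
    return on_demand_monthly(usd_per_hour, nodes) * (1 - discount)

dc2_large = 0.25  # USD per hour, on-demand (figure quoted above)
print(on_demand_monthly(dc2_large))  # 182.5 USD per month
print(reserved_monthly(dc2_large))   # 45.625 USD per month
```

The gap widens linearly with node count, which is why the 1-3 year commitment question matters so much for steady workloads.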



Alibaba Cloud Max Compute

MaxCompute is part of the Alibaba Cloud suite of applications: a cloud-based database service used as a data warehouse. This cloud data warehouse claims to be very secure compared to its competitors and complies with HIPAA for healthcare, Germany’s C5 standard, and PCI DSS.

It supports SQL, Graph, MapReduce, and MPI integration algorithms. It works with a batch and historical Data Tunnel, the service provided for users to import and export data, which is easy to scale.

The Data Hub is used to easily import incremental data. It uses 2D table storage with compression to reduce costs. It also supports MapReduce and Graph computing, exposes a REST API, and has its own SDK, plus Graph, Spark, and SQL support. It is not very popular yet, but it is in the race.


Storage of less than 1 GB is free. 1-100 GB costs between 0.0028 and 0.28 USD; 100 GB-9 TB costs approximately 0.0014-13 USD; and 9-90 TB costs approximately 12-120 USD. For more information about prices, refer to this link: Max Compute Prices




Snowflake

Snowflake is another data warehouse database based in the cloud. It is a great data warehouse solution, but it is not part of a data platform; that is, unlike AWS, MaxCompute, and Azure, it does not offer additional services in the cloud. It is just a data warehouse in the cloud, but it is a good one.

Snowflake supports SQL for accessing data, including semi-structured data like JSON. It is possible to access non-relational data using SQL, as we do with ASDW using PolyBase. It also offers immediate scaling and columnar storage. You can connect to Snowflake using Java (JDBC) or ODBC, and there are also web consoles, native connectors, and a command line.

Snowflake claims to have a better architecture designed for the cloud and it is optimized for better performance.


Currently, prices depend on the region and edition. There are several editions: Standard, Premier, Enterprise, Enterprise for Sensitive Data, and Virtual Private Snowflake. In the US West and East regions, all editions cost 40 USD per TB per month for storage, with compute costing 2 USD per hour for Standard and 2.25 USD for Premier. The Enterprise and Enterprise for Sensitive Data editions cost 3 and 4 USD per compute hour, respectively.

For more information about prices, refer to this link: Select Pricing For Your Region



Google Big Query

Big Query uses a serverless system that can handle petabytes of data. It offers really fast queries, able to scan petabytes of data in seconds. The Google folks are experts in big data, and Google Big Query shows the power they have. Big Query is like any Google technology: cloud-based, fast, easy to learn, and simple.

It also uses SQL to access data. Big Query works with the Google Cloud Storage, and it works with the following technologies:

  • Informatica
  • Looker
  • Qlik
  • SnapLogic
  • Tableau
  • Talend
  • Google Analytics 360 Suite

Big Query has a web console (Web UI) to access the data. It also includes a command-line tool, and you can use the REST API to query information. You can use Java, Python, or .NET to access data.

The Big Query concept is to run a query with terabytes of information in seconds or minutes. You do not need a virtual machine and you do not need to worry about configuring hardware and software.


Storage costs 0.02 USD per GB per month, and the first 10 GB are free. Long-term storage costs 0.01 USD per GB. Queries cost 5 USD per TB scanned. Loading and copying data is free. For more information about pricing, refer to this link: Big data pricing
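As a quick sanity check on what a workload would cost, here is a small estimator using Google’s published 2018 on-demand rates (queries billed at 5 USD per TB scanned; active storage at 0.02 USD per GB-month with the first 10 GB free). These figures are a snapshot in time, so treat them as illustrative:

```python
QUERY_USD_PER_TB = 5.0      # on-demand query price per TB scanned
STORAGE_USD_PER_GB = 0.02   # active storage per GB per month
FREE_STORAGE_GB = 10        # free storage tier

def query_cost(tb_scanned):
    return tb_scanned * QUERY_USD_PER_TB

def storage_cost(gb_stored):
    billable = max(gb_stored - FREE_STORAGE_GB, 0)
    return billable * STORAGE_USD_PER_GB

print(query_cost(2.5))              # 12.5 USD for 2.5 TB scanned
print(round(storage_cost(500), 2))  # 9.8 USD/month for 500 GB stored
```

Note how the model charges for bytes scanned rather than compute hours, which is the key difference versus Redshift or ASDW pricing.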





Teradata

Teradata is a very popular database, commonly used as a data warehouse and also as a large-scale database. It is one of the most popular databases in the world, and many people like it. However, like Snowflake, it is a single isolated solution and not part of a database platform like Azure, AWS, or Alibaba MaxCompute. Those platforms offer not only a data warehouse but also other solutions to complement it.

You have 3 options with Teradata:

  • IntelliCloud™ offers a Teradata database in the Cloud+Aster Analytics+Hadoop.
  • Public cloud offers a Teradata database+Aster Analytics in AWS or Azure.
  • Private Cloud offers virtualized VMs with IntelliCloud and Public Cloud.

You can query using big data technology via Teradata QueryGrid™. It is possible to have your database in the cloud, on-premises, or in a hybrid environment. It also includes In-Memory Intelligent Processing and a gateway to actionable data insight.


Prices vary by edition. The Developer edition is free; the Base, Advanced, Enterprise, and IntelliSphere tiers have different prices per hour. For example, on an EC2 m4.4xlarge, the base price is 1.564 USD per hour and the Enterprise tier costs 4.17 USD per hour. For more information about prices, refer to this link: Teradata Software Pricing




In this article, we saw different alternatives for creating a data warehouse in the cloud. As you can see, there are a lot of competitors, and many of them have almost the same features. Pricing options change over time, and the features keep improving. It is good to know the competitors and check all the options available in the cloud data warehouse world.


Posted in Uncategorized

Microsoft and Google – free courses

This post links to content in Portuguese, but most of the courses are in English.

Anyway, it’s always good to learn. Why not learn another language?

Microsoft free courses in AI, DevOps, and Cloud: https://news.microsoft.com/pt-br/microsoft-abre-ao-publico-cursos-de-treinamento-em-inteligencia-artificial/ & https://academy.microsoft.com

Google training in Brazil: http://idgnow.com.br/carreira/2018/04/23/google-oferece-treinamento-gratuito-para-4-mil-profissionais-de-ti-em-sp/


Posted in Azure, Cloud

A new global threat: Meltdown and Spectre

Yes, the post title is singular, but there are two threats; it’s a joke, because both are kinda similar. In this day and age, we should all be familiar with the importance of digital security. Almost every year some brand-new virus or other malicious program is released into the wild. You have to stay vigilant, but at the same time, don’t panic.

So this is a quick blog post gathering some resources on the newest threats that are so famous today. Take a moment to read through them if you want more details; if not, just make sure to apply the updates as soon as they become available.

Understanding Meltdown & Spectre: What To Know About New Exploits That Affect Virtually All CPUs

Critical SQL Server Patches for Meltdown and Spectre – SQLServerCentral

Quote from Steve Jones, from SQLServerCentral:

It’s Time to Patch and Upgrade

By Steve Jones, 2018/01/05

I don’t want to be Chicken Little here, but the Meltdown/Spectre bugs have me concerned. I don’t know the scope of the vulnerabilities, as far as exploits go, but I do know the lax ways in which humans interact with machines, including running code, opening untrusted documents, and just making silly mistakes. No matter how careful you think you are, can you be sure everyone else in your organization is just as careful? Are you sure they won’t do something silly from a database server? Or do something from a server (or workstation) that has access to a database server? Or use a browser (yes, there’s an exploit)?

PATCH your system, soon.

Vulnerabilities in hardware are no joke, and even if you think you’re fairly safe, it’s silly to let this one go by and assume you won’t get hit. The advent of widely deployed scripting tools, botnets, and more means that you never know what crazy mechanism might end up getting to your database server. Is it really worth allowing this when you can patch a system? This is a no-brainer, a simple decision. Just schedule the patches. With all the news and media coverage, I’m sure you can get some downtime approved in the next few weeks. After all, your management wouldn’t want to explain any data loss from this to their customers any more than you’d want to explain it to your boss.

We’ve got a page at SQLServerCentral that summarizes the links I’ve found for information, patches, etc. I’m sure things will change rapidly, and I’ll update the article as I get more information. The important things to note are that not all OSes have patches yet, and there are situations where you might not need to change anything. That’s good, as there are some preliminary reports of patches causing performance issues (degrading it) for PostgreSQL and MongoDB systems. I did see this tweet about no effects on SQL Server, which is good, but YMMV.

Most of us know patching matters and that we need to do it periodically (even if it’s a pain); however, many of you are like me in that you rarely upgrade systems. Once they work, and because I have plenty of other tasks, I don’t necessarily look to upgrade a database platform for years. One downside is that for a major vulnerability like the Meltdown/Spectre attacks, patches likely won’t come out for old systems and versions of SQL Server. That is the case here.

That means that if you’re on SQL 2005 or earlier, or even on older Windows OSes, you might really consider planning an upgrade. Even if you aren’t overly worried about this exploit, you won’t want a vulnerability to live for a long time in your environment. You never know when a firewall will change, a server will move, or some malware will slip through (did I mention the browser exploit?). Plan on an upgrade. I’ve started asking about accelerating our upgrade plans, and you might think about that as well. I know management doesn’t want to spend money unnecessarily, but this feels necessary, and a good time to refresh your system to a supported version.

In general I like to delay my patches slightly from the world and not be on the bleeding edge. That’s fine, but don’t wait too long with this one. I would hope that most people get systems patched in the next month. If not, don’t expect any sympathy if you lose data.

Keep calm, patch your systems and… don’t panic!

Posted in Uncategorized

Kicking off 2018: thoughts on Cloud databases

It has been a while since I posted articles here, with a whopping 2 posts over the entire year of 2017 (pun intended). To kick-start 2018, firstly I would like to wish a happy new year to all my dear readers.

And secondly, I would like to ask: do you know what you want from the cloud next year? When designing applications or systems around the newest options on the market, you have to weigh a lot of new information too, and some choices can come back to bite you after implementation. I hope the article below helps you make a good decision on the database side of your next projects.

Enjoy the reading!

Why Amazon DynamoDB isn’t for everyone – How to decide when it’s right for you



Posted in AWS, Cloud

Back to basics: a new browser with a familiar feel

Over 15 years have passed, and here it goes again: a new browsing experience hits me! It is funny how something that is now so trivial can still surprise you. I am browsing all day, be it doing work, searching for things, writing about work (I’m looking at you, e-mail), or even writing here. Welcome, Vivaldi!

The deal is that I am a power user, doing multiple things all day and running heavy software (like databases, virtualization, or server management tools) while needing a bunch of browser tabs open to see what needs to be done (or to send evidence of what I just did).

For my profile, I discovered that 4 CPU threads and an SSD system drive are hard requirements, as my productivity is heavily impaired on any system missing either. And recently I also discovered that 12 GB of RAM is a must. Unfortunately, my current work machine has only 8 GB of RAM, which leads to frequent out-of-memory warnings and the eventual freeze of an app or the OS itself.

As the multiple browser tabs usually eat most of the RAM, I tried several recent browsers to replace Chrome and its infinite hunger for memory: IE, Opera, Firefox, and Neon. All had something missing or actually used more memory.

Even my beloved Opera had its quirks today, even though I remember its version 5.10 as being revolutionary back in 2001. Yeah, I was one of those very few users who first saw tabbed browsing, and I was in love with mouse gestures; you can check the first link in this post for the release date and features of each Opera version. It’s a shame Opera could not live up to its name recently, as they cut a lot of hardcore functionality in favor of simplicity. But just today I discovered the Vivaldi browser, and its creator was part of that amazing team back in the 90s!

Anyway, just a few minutes after finding this out, Vivaldi is up and running and I’m writing this post on it. It’s using about 190 MB of RAM with 6 tabs open, while Chrome, left open for the same amount of time with just 2 tabs, climbed from 250 MB to 260 MB while sitting in the background (this is for the highest-consuming tab; of course you have to count all of them, but it’s a great starting point).

And Vivaldi does this while feeling faster, reviving my mouse gestures (try them, it’s amazing) and still bringing something new to the table: page tiling. See it for yourself:

This was a really great surprise, let’s find out what other treats this team has in store for us. Seriously, try it now!


Posted in Uncategorized, Windows

Some SQL Server performance goodies

Hi folks. It’s been a long time, but Databases Never Die (this could be a movie name, don’t you think?). Below are a few articles to get you thinking about performance.

First up is SQL on Windows vs. SQL on Linux, with some interesting numbers: http://www.sqlservercentral.com/articles/vnext/152671/?utm_source=SSC&utm_medium=pubemail

Then some tips about compression; this one is really worth checking: https://sqlperformance.com/2017/01/sql-performance/compression-effect-on-performance

And finally some index goodness. Indexes are always a good thing to learn more about: http://www.sqlservercentral.com/blogs/confessions-of-a-microsoft-addict/2017/02/08/dba-101-what-you-may-be-missing-with-missing-indexes/?utm_source=SSC&utm_medium=pubemail



Posted in Uncategorized

SQL Server 2016 Management Studio Download

It’s that time of the year. No, I’m not talking about Christmas; I’m talking about new product cycles from Microsoft. For SQL Server this usually happens in even years, and it’s 2016, so this year we have a new SQL version. And if you need to manage any new server, you need the most current SQL Server Management Studio too.

This post is about SSMS, not the features of SQL 2016 (which are awesome, but will be covered separately). This is the first time we get a dedicated product portal for SSMS; take a look at: https://msdn.microsoft.com/en-us/library/mt238290.aspx . This version is really great, and the portal is updated with new releases frequently. It’s the first time I feel comfortable using a newer version to manage all my servers (which run several old versions of SQL Server).

Anyway, I always give a direct link to the English version (the only one you should ever consider if you work in IT), so here it goes: http://go.microsoft.com/fwlink/?linkid=832812&clcid=0x409

Also, this is the first time that SQL Server Data Tools (the 2015 version) can create SSIS packages with backwards compatibility (down to the 2012 version), so you should download and use this version too: https://msdn.microsoft.com/en-us/library/mt204009.aspx . Again, here is the direct link to the English version: https://go.microsoft.com/fwlink/?LinkID=832313&clcid=0x409

Tip: to create SSIS packages for older versions of Integration Services, create a new project, open the Project menu, go to Properties, select Configuration Properties, then General, and change the TargetServerVersion property for that particular project. You’re ready! You can do this for each project and develop for various versions of the SSIS engine with no mess.
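Under the hood, that property ends up stored in the project’s XML. As an illustrative sketch (the minimal XML below is a stand-in, since a real .dtproj file has a much richer schema and namespaces), you could flip the target version for many projects with a few lines of Python:

```python
import xml.etree.ElementTree as ET

# Stand-in for a real .dtproj file (simplified for illustration).
sample_dtproj = """<Project>
  <Configurations>
    <Configuration>
      <Options>
        <TargetServerVersion>SQLServer2016</TargetServerVersion>
      </Options>
    </Configuration>
  </Configurations>
</Project>"""

root = ET.fromstring(sample_dtproj)
for node in root.iter("TargetServerVersion"):
    node.text = "SQLServer2012"  # build packages for the 2012 SSIS engine

print(ET.tostring(root, encoding="unicode"))
```

In practice, changing it through the Project properties dialog as described above is the safe route; keep the project under version control before scripting edits like this.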



Posted in SQLServer