I have tried Extreme Programming within a sprint, and I think it is an excellent agile method for Ops!

Part I: Agility (2003). I discovered the notion of extreme programming more than 15 years ago. My job was to integrate and develop pieces of code in Java for the IBM WebSphere Business Integration server. We were a small team with light programming skills. Part of our job was to operate the software; the other part was to develop it. It was 2003, and we were trying hard to stick to the specific framework we had developed.

From GraphQL to a table view with React and Apollo

In the last post I played with GraphQL. The next step is to actually query the data and display it. In this post I will use React (from Facebook) and the Apollo GraphQL client. Tooling: React. I won't give an introduction here, because you can find thousands of very good tutorials and advocacy articles on the web. I will, however, briefly explain why I chose React.

Playing with Facebook's GraphQL (applied to AWS products and offers management)

About GraphQL. GraphQL was invented by Facebook for the purpose of refactoring their mobile application. Facebook had reached the limits of the standard REST API, mainly because: getting that much information required a huge number of API endpoints, and the versioning of the API was counter-productive given Facebook's frequent deployments. But GraphQL is not only a query language tied to Facebook, and it is not only applicable to social data.

Linda's evalc, a (tuple)space oddity

For a change, I will start with a good soundtrack (YouTube version for those who are Spotify-less). This is my third article about the distributed coordination language Linda. The final target of the work is to use this coordination mechanism to deploy and maintain applications based on the description of their topology (using, for example, TOSCA as a DSL). Last time, I introduced a Lisp-based language (zygomys) as an embedded programming mechanism to describe the business logic.

350,000 rows, 133 cols... From a huge CSV to DynamoDB (without breaking the piggy bank).

In this post I will explain how to:

- Parse a CSV file and extract only certain columns
- Create a table in DynamoDB
- Insert all the data with an adaptive algorithm that stays within the provisioned capacity
- Reduce the capacity once the insertion is done

Exploring the problem: AWS Billing. In a previous post I explained how I was using DynamoDB to store a lot of data about AWS billing.
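The flow above can be sketched in a few lines of Python. This is a minimal illustration, not the post's actual implementation: the column names, table name, and capacity figure are hypothetical, and the pacing is a crude sleep-based throttle rather than a full adaptive algorithm.

```python
# Sketch of the CSV-to-DynamoDB flow: extract columns, batch, throttle writes.
import csv
import io
import time

BATCH_SIZE = 25  # DynamoDB's batch_write_item accepts at most 25 items per call


def selected_rows(fileobj, columns):
    """Yield one dict per CSV row, keeping only the wanted columns."""
    for row in csv.DictReader(fileobj):
        yield {c: row[c] for c in columns if c in row}


def batches(rows, size=BATCH_SIZE):
    """Group an iterable of rows into lists of at most `size` items."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch


def insert_all(table, rows, write_capacity=100):
    """Write batches, sleeping so we roughly stay under the provisioned WCU."""
    for batch in batches(rows):
        with table.batch_writer() as writer:  # boto3 DynamoDB Table resource
            for item in batch:
                writer.put_item(Item=item)
        time.sleep(len(batch) / write_capacity)  # crude adaptive pacing
```

With boto3, `table` would be something like `boto3.resource("dynamodb").Table("billing")` (a hypothetical table name); `batch_writer()` handles retries of unprocessed items for you.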