DevOps Engineer
Sample CV
DevOps Engineer Summary
DevOps Engineer with 9 years of experience across web, mobile, security, cloud, big data, and data science / AI projects. He has created products for the telecommunications, financial, and start-up industries, and has designed and built architectures ranging from financial and banking systems to health care and entertainment. He founded Greenshark, a company providing consulting services in cloud, virtualization, machine learning / statistical learning, and more.
DevOps Engineer – April 2020 – Current
Project description:
DevOps and Kafka architect for the RTP platform, integrating multiple systems for radio stations on the internet (see the Kafka Connect sketch after this entry).
Responsibilities:
- Jenkins
- Kafka Connect
- Kafka Schema Registry
- Kafka
- Ansible
- Terraform
- Bash scripting
- Python
- AWS stack
- Apache Nifi
Technologies and tools:
React Native, MongoDB, Python, Kubernetes, Scala, Kafka, Docker.
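For illustration, here is a minimal Python sketch of registering a sink over the Kafka Connect REST API, the kind of task this role involves. The worker URL, connector name, topic, S3 bucket, and Schema Registry URL are placeholders, not details from the project.

import requests

# Hypothetical example: register an S3 sink connector against a Kafka
# Connect worker's REST API (default port 8083). All names are placeholders.
CONNECT_URL = "http://localhost:8083/connectors"

connector = {
    "name": "rtp-s3-sink",  # illustrative connector name
    "config": {
        "connector.class": "io.confluent.connect.s3.S3SinkConnector",
        "topics": "rtp-events",
        "s3.bucket.name": "rtp-archive",
        "s3.region": "us-east-1",
        "storage.class": "io.confluent.connect.s3.storage.S3Storage",
        "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
        "flush.size": "1000",
        # The Schema Registry URL lets the converter resolve Avro schemas.
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url": "http://localhost:8081",
    },
}

resp = requests.post(CONNECT_URL, json=connector, timeout=10)
resp.raise_for_status()
print(resp.json())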
DevOps – March 2019 – November 2019
Project description:
DevOps Support for CRM web/mobile application.
Responsibilities:
Worked with the following technologies (a deployment sketch follows this entry):
- AWS CloudFront
- AWS S3
- AWS EC2
- AWS Route53
- AWS ECS
- AWS CodeCommit
- AWS EMR
- AWS SQS
- AWS AppSync
Technologies and tools:
AWS, React Native, MongoDB, Python, Kubernetes, Scala, Kafka, Docker.
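As an illustration of the AWS services listed above, a minimal boto3 sketch of a front-end deploy step: upload a bundle to S3 and invalidate the CloudFront cache. The bucket name, distribution ID, and file paths are placeholders.

import time
import boto3

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

BUCKET = "crm-web-assets"          # placeholder bucket
DISTRIBUTION_ID = "E1234EXAMPLE"   # placeholder distribution ID

# Upload the compiled bundle to S3.
s3.upload_file(
    "build/app.js", BUCKET, "app.js",
    ExtraArgs={"ContentType": "application/javascript"},
)

# Invalidate the cached path so CloudFront serves the new bundle.
cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/app.js"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)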
Consulting and Training – November 2018 – Current
Project description:
Delivering consulting and training engagements across cloud, data, and software technologies.
Responsibilities:
Delivered the following courses and engagements:
- Cloud Foundry integration with vSphere – Thales – Hong Kong
- Kubernetes on AWS and on-premise – Athabasca University
- Spring back office application – thyssenkrupp – México
- Elasticsearch Analytics – Edenred
- WSO2 Training Course for a Colombian company
- Oracle DB back office application – thyssenkrupp – México
- PostgreSQL – Faurecia – México
- Kafka – T-Systems – Hungary
- Machine Learning – T-Systems – Germany
- Python for Data Science – T-Systems Training Course Remote
- Big Data with Data Lake (Spark and Hadoop) – Monetary Authority – Hong Kong
- AWS – Freeagent – California
- Docker & Docker administration – Athabasca University
- Python Training course – T-Systems – Germany
- R Training Course – Monetary Authority – Hong Kong
- Fixing SysML issues – Continental – México
- Jira/Confluence implementation – Configura – Malaysia
- Deep Learning – T-Systems – France
- Reinforcement Learning: modeling fixes and data ingestion improvements – T-Systems – France
- Redis for Administrators – T-Systems – Germany
- Technical Patterns
- GCP modern data lakes – T-Systems – Germany
- GCP Big Data Analytics – T-Systems – Germany
- TDD – Banxico & Buró de crédito
- Big Data for sales – Nobleprog Europe
- ATDD using Cucumber.js and Selenium – France
- Apigee on-premise administration and user provisioning – France
DevOps – June 2017 – November 2018
Project description:
Working in Santiago de Chile; Buenos Aires, Argentina; Hong Kong; Bogotá, Colombia; and California, USA.
Responsibilities:
- Managing 15 people at different levels of the company, including developers and PMs, with an emphasis on strong communication
- Data and data-mastering solutions on a Cloudera data lake
- Migrating solutions from on-premise architectures to GCP: BigQuery, cloud computing, and Kubernetes
- A medical project ingesting data into a data lake and analyzing it with EMR and Java, scheduling executions with CloudWatch and Oozie, and tuning Hadoop HDFS and Hive execution
- Developing data models for statistics and probability purposes.
- Analyzing data with Elasticsearch for DaaS.
- Connecting the Thomson Reuters API to a Spring project and analyzing the EMA and real-time methods
- Creating a sandbox environment for testing in a data science ecosystem (dummy data, connections to Keras and TensorFlow)
- Applying generative and discriminative models to price macroeconomic assets
- Building data pipelines with AWS technologies such as Glue, Lambda, Spark, Hive, and SAS for analytics purposes (see the PySpark sketch after this entry)
Technologies and tools:
Cloud Foundry, Kubernetes, Hadoop, Kafka, Kinesis, Spark, Python, Java, Scala, Amazon Web Services, Elasticsearch, GCP for data analysis, Logstash, Java Spring, Kibana, Node.js, JavaScript, PyTorch, R, and Terraform.
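A minimal PySpark sketch of the kind of analytics pipeline described in this entry, assuming raw JSON events on S3 and partitioned Parquet output; bucket names and columns are illustrative.

from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("daily-events-rollup")
         .enableHiveSupport()   # lets the output register as Hive tables
         .getOrCreate())

# Read raw JSON events from S3 (placeholder bucket).
raw = spark.read.json("s3://example-raw-bucket/events/")

# Aggregate per day and asset (placeholder columns).
daily = (raw
         .withColumn("day", F.to_date("event_time"))
         .groupBy("day", "asset_id")
         .agg(F.count("*").alias("events"),
              F.avg("price").alias("avg_price")))

# Write Hive-style partitioned Parquet to the curated zone.
(daily.write
      .mode("overwrite")
      .partitionBy("day")
      .parquet("s3://example-curated-bucket/daily_rollup/"))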
Big Data Architect – April 2016 – June 2017
Project description:
Working at BBVA Mexico HQ.
Responsibilities:
- Data and data-mastering solutions on a Cloudera data lake
- Hive tuning, performance, structure, data governance, and sanity checks
- Managing quotas and kernels with Jupyter Notebook
- Oozie workflows to schedule tasks throughout the day
- Real-time prediction with Spark Streaming and Spark ML (see the streaming sketch after this entry)
- Designing models with Scala GraphX to solve financial risk problems
- Running projects using Mesos as the container coordinator
- AWS for HSM security and in-transit encryption
- Bigtable-style HBase supporting on-premise data analytics
- Pentaho with Tableau for ETL and KPI visualization
- Knowledge transfer to colleagues and new talents about the technologies
Technologies and tools:
Spark, Cloudera, Amazon Web Services, Google Cloud Platform, Scala, Java, Python, Kubernetes, Docker, Tableau, MicroStrategy, Kafka, Confluent Kafka, Cassandra, Exadata, Teradata, PostgreSQL, Chef, GCP for web exposure, Ansible, and Terraform.
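A sketch of real-time prediction in the spirit of the Spark Streaming and Spark ML work above, written here with Structured Streaming; the Kafka topic, model path, and event schema are placeholders.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType
from pyspark.ml import PipelineModel

spark = SparkSession.builder.appName("realtime-scoring").getOrCreate()

# Pre-trained Spark ML pipeline (placeholder path).
model = PipelineModel.load("hdfs:///models/risk_model")

# Consume events from Kafka (requires the spark-sql-kafka package).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "transactions")
          .load())

# Kafka values arrive as bytes; parse them into typed columns.
schema = StructType([
    StructField("account", StringType()),
    StructField("amount", DoubleType()),
])
parsed = (events
          .select(F.from_json(F.col("value").cast("string"), schema).alias("tx"))
          .select("tx.*"))

# Score each micro-batch with the ML pipeline.
scored = model.transform(parsed)

query = (scored.writeStream
         .format("console")   # swap for a Kafka or HBase sink in production
         .outputMode("append")
         .start())
query.awaitTermination()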
Software Architect – October 2014 – March 2016
Project description:
Designing a solution for monitoring Banorte HSM systems.
Responsibilities:
- Using Docker to containerize applications in production, orchestrated with Swarm
- Handling Real-time events and notifications for Banorte
- Designing secure systems for web applications serving a base of 3,000 active users
- Designing and developing reliable back-end banking mobile applications
- Ensuring everyone adopts the architecture and uses it correctly
- Knowledge transfer to colleagues and new talents about the technologies
Technologies and tools:
Java Spring, Hibernate, ActiveMQ, Kafka, Hadoop MapReduce and Hive, Node.js, JavaScript, MySQL, MongoDB, AWS.
Software Engineer – February 2014 – September 2014
Project description:
Developing solutions with Node.js.
Responsibilities:
- Creating new cloud infrastructure on AWS
- Migrating a legacy PHP system to Node.js and JavaScript technologies
- Creating REST APIs to connect decoupled systems
- Migrating the MySQL database to MongoDB (see the migration sketch after this entry)
- Leading teams to achieve business value through technology
Technologies and tools:
Node.js, PHP, MySQL, MongoDB, AWS, Android, Java.
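An illustrative Python sketch of the MySQL-to-MongoDB migration mentioned in this entry: stream rows out of a relational table and insert them as documents in batches. Connection details and the users table are placeholders.

import mysql.connector            # pip install mysql-connector-python
from pymongo import MongoClient   # pip install pymongo

mysql_conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="legacy")
mongo = MongoClient("mongodb://localhost:27017")
target = mongo["app"]["users"]

cursor = mysql_conn.cursor(dictionary=True)  # rows come back as dicts
cursor.execute("SELECT id, name, email, created_at FROM users")

batch = []
for row in cursor:
    row["_id"] = row.pop("id")   # reuse the relational primary key
    batch.append(row)
    if len(batch) >= 1000:       # batch inserts to limit round trips
        target.insert_many(batch)
        batch = []
if batch:
    target.insert_many(batch)

mysql_conn.close()
mongo.close()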
Full-Stack Web/Mobile Developer – November 2013 – February 2014
Project description:
Supporting existing applications at Telcel.
Responsibilities:
- Creating a Python/Django backend for Sanborns mobile applications (see the Django sketch after this entry)
- Developing a mobile application for a telephony plan called “Plan Viajero”
Technologies and tools:
Android, iOS, Ruby On Rails, Node.js, MongoDB, AWS, Google Cloud Compute.
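A minimal sketch of the kind of Django view such a mobile backend exposes; the Plan model, its fields, and the route are hypothetical.

from django.http import JsonResponse
from django.views import View

from .models import Plan  # hypothetical model


class PlanListView(View):
    """Return active telephony plans as JSON for the mobile client."""

    def get(self, request):
        plans = Plan.objects.filter(active=True).values("name", "price")
        return JsonResponse({"plans": list(plans)})

# urls.py (illustrative):
# urlpatterns = [path("api/plans/", PlanListView.as_view())]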
Why Choose Sonatafy
Voted #1 Most Trusted US-Based Nearshore Software Company of 2020, Sonatafy provides access to the TOP 1% of Software Development resources in Latin America.
With Sonatafy, you pay $25 to $53 per hour for top-tier DevOps Engineer talent! Sonatafy’s Talent Acquisition team can place qualified engineers in as little as two weeks, guaranteeing you best-in-class service.
Our DevOps Engineers:
✓ Highly qualified, top tier talent
✓ Proficient English speakers, affordable
✓ Resources placed to match your time zone
Sonatafy can AUDIT, VISUALIZE, TRANSFORM, VERIFY, and MAINTAIN your complete development lifecycle.
For more information, talk to us today or give us a call at 619-736-7218, and follow us on LinkedIn and Facebook for news, updates, and discussions with industry professionals. #sonatafytechnology #devops #Engineer