
5 AWS Platforms Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 10.0 years

5 - 10 Lacs

Pune, Maharashtra, India

On-site

We are seeking a highly skilled Solutions Architect with 5+ years of experience in Data Engineering, specializing in AWS platforms, Python, and SQL. You will be responsible for designing and implementing scalable, cost-effective data pipelines on AWS, optimizing data storage, and supporting the ingestion and transformation of large datasets for business reporting and AI/ML exploration. This role requires a strong functional understanding of client requirements, the ability to deliver optimized datasets, and adherence to security and compliance standards.

Roles & Responsibilities:
- Design and implement scalable, cost-effective data pipelines on the AWS platform using services such as S3, Athena, Glue, and RDS.
- Manage and optimize data storage strategies for efficient retrieval and integration with other applications.
- Support the ingestion and transformation of large datasets for reporting and analytics.
- Develop and maintain automation scripts in Python to streamline data processing workflows.
- Integrate tools and frameworks such as PySpark to optimize performance and resource utilization.
- Implement monitoring and error-handling mechanisms to ensure the reliability and scalability of data solutions.
- Work closely with onsite leads and client teams to gather and understand functional requirements.
- Collaborate with business stakeholders and the Data Science team to provide optimized datasets suitable for business reporting and AI/ML exploration.
- Document processes, provide regular updates, and ensure transparency in deliverables.
- Optimize AWS service utilization to maintain cost-efficiency while meeting performance requirements.
- Provide insights on data usage trends and support the development of reporting dashboards for cloud costs.
- Ensure secure handling of sensitive data with encryption (e.g., AES-256, TLS) and role-based access control using AWS IAM.
- Maintain compliance with organizational and industry regulations for data solutions.
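The storage-optimization duties above usually come down to partition layout. As a minimal illustrative sketch (bucket, table, and file names are hypothetical), Hive-style `year=/month=/day=` prefixes let Glue and Athena prune partitions so queries scan — and bill for — less data:

```python
from datetime import date

def partitioned_key(prefix: str, table: str, day: date, filename: str) -> str:
    """Build a Hive-style partitioned S3 key (year=/month=/day=) so Athena
    and Glue can prune partitions instead of scanning the whole table."""
    return (f"{prefix}/{table}/"
            f"year={day.year}/month={day.month:02d}/day={day.day:02d}/{filename}")

def athena_partition_filter(table: str, day: date) -> str:
    """Athena/Presto query that touches only one partition of the table."""
    return (f"SELECT * FROM {table} "
            f"WHERE year = {day.year} AND month = {day.month} AND day = {day.day}")

key = partitioned_key("s3://analytics-raw", "events", date(2024, 3, 7), "part-0000.parquet")
print(key)  # → s3://analytics-raw/events/year=2024/month=03/day=07/part-0000.parquet
```

The same key scheme works whether the data is written by Glue jobs, PySpark, or plain boto3 uploads; only the partition columns need to be registered in the Glue catalog.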
Skills Required:
- Strong emphasis on AWS platforms; hands-on expertise with AWS services such as S3, Glue, Athena, and RDS.
- Proficiency in Python for building data pipelines that ingest data and integrate it across applications.
- Strong proficiency in SQL.
- Demonstrated ability to design and develop scalable data pipelines and workflows.
- Strong problem-solving skills and the ability to troubleshoot complex data issues.

Preferred Skills:
- Experience with big data technologies, including Spark, Kafka, and Scala, for distributed data processing.
- Hands-on expertise with AWS big data services such as EMR, DynamoDB, Athena, Glue, and MSK (Managed Streaming for Kafka).
- Familiarity with on-premises big data platforms and tools for data processing and streaming.
- Proficiency in scheduling data workflows using Apache Airflow or similar orchestration tools such as One Automation, Control-M, etc.
- Strong understanding of DevOps practices, including CI/CD pipelines and automation tools.
- Prior experience in the telecommunications domain, with a focus on large-scale data systems and workflows.
- AWS certifications (e.g., Solutions Architect, Data Analytics Specialty) are a plus.

Qualification: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related technical field, or equivalent practical experience.
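Orchestrators such as Airflow or Control-M model a pipeline as a DAG of tasks and run each task only after its upstream dependencies finish. A toy illustration of that core scheduling decision (task names are hypothetical, not from any posting):

```python
def run_order(dag: dict[str, list[str]]) -> list[str]:
    """Return a valid execution order for tasks, given each task's
    upstream dependencies -- a depth-first topological sort."""
    order, done = [], set()

    def visit(task):
        for dep in dag[task]:       # run every dependency first
            if dep not in done:
                visit(dep)
        if task not in done:        # then the task itself, exactly once
            done.add(task)
            order.append(task)

    for task in dag:
        visit(task)
    return order

# Hypothetical pipeline: extract -> transform -> load, then a reporting task.
pipeline = {"extract": [], "transform": ["extract"],
            "load": ["transform"], "report": ["load"]}
print(run_order(pipeline))  # → ['extract', 'transform', 'load', 'report']
```

Real orchestrators add retries, schedules, and parallelism on top, but the dependency ordering is the same idea.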

Posted 1 day ago


3.0 - 7.0 years

0 Lacs

Surat, Gujarat

On-site

As a Digital Marketing Specialist at Braincuber Technologies, you will be responsible for comparing your company's advertising techniques and offerings with those of competing businesses. Your role will involve maintaining a profitable message for the company and conducting research to ensure that offerings are priced competitively, providing value to potential clients while ensuring the company's success. Monitoring changing trends in advertising consumption data, and ensuring that clients and consumers view the company's advertising projects and make purchases, will be essential tasks.

To excel in this role, you should possess a bachelor's or master's degree in computer science, information technology, or a related field. Prior experience in planning, designing, developing, architecting, and deploying cloud applications on AWS platforms is required. Knowledge of fundamental AWS services, applications, and best practices for AWS architecture is crucial. You should also have practical expertise in disciplines such as database architecture, business intelligence, machine learning, advanced analytics, big data, Linux/Unix administration, Docker, Kubernetes, and working with cross-functional teams.

Your responsibilities will include:
- Understanding the organization's current application infrastructure and proposing enhancements or improvements.
- Defining best practices for application deployment and infrastructure upkeep.
- Collaborating with the IT team on web app migrations to AWS and implementing low-cost migration techniques.
- Developing reusable and scalable programs, and performing software analysis, testing, debugging, and updating.
- Creating serverless applications using multiple AWS services.
- Examining and evaluating programs to identify technical faults and recommending solutions.

If you have experience with Agile methodologies and possess AWS credentials or other professional certifications, we would like to meet you.
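The "serverless applications" item above typically means AWS Lambda behind API Gateway. A minimal sketch of the handler shape (the route and field names are hypothetical): Lambda passes the request in as an event dict and expects an HTTP-style response dict back.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy event:
    read a query-string parameter, return a JSON HTTP response."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the handler is a plain function, it can be unit-tested locally by calling it with a sample event before deployment.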
Join our team at Braincuber Technologies and be part of a dynamic environment where your skills will contribute to the company's success.

Posted 2 weeks ago


8.0 - 11.0 years

8 - 11 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Roles and Responsibilities:
- Design, implement, and maintain the infrastructure and deployment pipeline on AWS and OpenShift, leveraging Terraform for Infrastructure as Code (IaC) management.
- Utilize Terraform workflow automation, ensuring seamless collaboration and efficient infrastructure changes.
- Collaborate with software development, QA, and operations teams to ensure seamless integration of infrastructure and application components.
- Monitor, troubleshoot, and optimize infrastructure performance, ensuring high availability and scalability.
- Implement and maintain CI/CD pipelines, incorporating automation and testing frameworks to improve deployment processes and reduce time to market.
- Proactively identify and address potential issues in infrastructure, security, and performance.
- Continuously research and implement industry best practices and emerging technologies to enhance and evolve the DevOps process.
- Document and maintain infrastructure architecture, policies, and procedures.
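Terraform workflow automation often means inspecting a plan programmatically before apply. A sketch, assuming the machine-readable output of `terraform show -json plan.out` (the resource addresses below are made up): counting planned actions lets a CI/CD stage flag or block destructive changes.

```python
import json

def summarize_plan(plan_json: str) -> dict:
    """Count planned create/update/delete actions from Terraform's JSON
    plan output, e.g. to gate a pipeline stage on destructive changes."""
    plan = json.loads(plan_json)
    counts = {"create": 0, "update": 0, "delete": 0}
    for rc in plan.get("resource_changes", []):
        for action in rc["change"]["actions"]:
            if action in counts:
                counts[action] += 1
    return counts

# Hypothetical plan: one new bucket, one instance replaced (delete + create).
sample = json.dumps({"resource_changes": [
    {"address": "aws_s3_bucket.logs", "change": {"actions": ["create"]}},
    {"address": "aws_instance.web", "change": {"actions": ["delete", "create"]}},
]})
print(summarize_plan(sample))  # → {'create': 2, 'update': 0, 'delete': 1}
```

A pipeline can then require manual approval whenever `delete` is non-zero.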
- Deploy, automate, maintain, and manage an AWS production system.
- Ensure AWS production systems are reliable, secure, and scalable.
- Resolve problems across multiple application domains and platforms using system troubleshooting and problem-solving techniques.
- Automate operational processes by designing, maintaining, and managing tools.
- Provide primary operational support and engineering for all Cloud and Enterprise deployments.
- Lead the organization's platform security efforts by collaborating with the core engineering team.
- Develop policies, standards, and guidelines for IaC and CI/CD that teams can follow.

Qualifications

Required Skills:
- Technology stack: AWS, Kubernetes (K8s), OpenShift, Terraform, Jenkins, CI/CD pipelines.
- Experience building AWS platforms.
- Extensive AWS and OpenShift experience.
- Experience working in an agile team.
- Strong experience using infrastructure as code.
- Extensive CI/CD experience.
- Strong experience leveraging Terraform for Infrastructure as Code.

Required Experience & Education:
- 8-11 years of experience.
- Proven experience with architecture, design, and development of large-scale enterprise application solutions.
- Bachelor's degree in a related technical/computer science field, or equivalent work experience.
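Much of the operational tooling described above reduces to retrying flaky cloud calls with exponential backoff. A generic sketch (the operation being retried is whatever the tool wraps; nothing here is AWS-specific):

```python
import time

def retry_with_backoff(op, attempts=4, base_delay=0.1, sleep=time.sleep):
    """Call op(); on failure wait base_delay * 2**i and retry.
    Re-raises the last error once all attempts are exhausted."""
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise
            sleep(base_delay * (2 ** i))
```

Injecting `sleep` keeps the helper unit-testable; production code would typically also cap the delay and add jitter so many clients do not retry in lockstep.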

Posted 1 month ago


8.0 - 12.0 years

25 - 30 Lacs

Hyderabad

Work from Office

Overview: We are seeking an experienced Cloud Delivery Engineer to work horizontally across our organization, collaborating with Cloud Engineering, Cloud Operations, and cross-platform teams. This role is crucial in ensuring that cloud resources are delivered according to established standards, with a focus on both the Azure and AWS platforms. The Cloud Delivery Engineer will be responsible for the delivery of Data and AI platforms.

Responsibilities:
- Collaborate closely with Cloud Engineering, Cloud Operations, and cross-platform teams to ensure seamless delivery of cloud resources on both Azure and AWS.
- Gather and analyze requirements from various teams, translating them into actionable cloud delivery plans.
- Provision cloud resources, ensuring they adhere to approved architectures and organizational standards on both Azure and AWS.
- Develop and maintain delivery processes and methodologies for multi-cloud deployments, with a focus on Data and AI platforms.
- Coordinate with project managers and business stakeholders to align cloud deliveries with project timelines and business objectives.
- Implement and maintain quality assurance processes for cloud deliveries across platforms.
- Stay up to date with the latest Azure and AWS services, features, and best practices, particularly in Data and AI, and incorporate them into delivery strategies.
- Identify and mitigate risks associated with cloud deployments and resource management in multi-cloud environments.
- Create and maintain documentation for cloud delivery processes, standards, and best practices across platforms.
- Collaborate with security teams to ensure cloud resources are deployed with appropriate security controls and comply with organizational policies in both Azure and AWS.
- Participate in capacity planning and cost optimization initiatives for multi-cloud resources.
- Facilitate knowledge-sharing sessions and training programs to enhance the team's expertise in Azure, AWS, and Data and AI technologies.
- Establish and track key performance indicators (KPIs) for cloud delivery efficiency and quality across platforms.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field; Master's degree preferred.
- 8+ years of experience in IT, with at least 4 years focused on cloud technologies, including substantial experience with both Azure and AWS.
- Strong understanding of Azure and AWS services, architectures, and best practices, particularly for Data and AI platforms.
- Certifications in both Azure (e.g., Azure Solutions Architect Expert) and AWS (e.g., AWS Certified Solutions Architect - Professional).
- Experience working with multiple teams across cloud platforms.
- Demonstrated ability to work horizontally across different teams and platforms.
- Strong knowledge of cloud security principles and compliance requirements in multi-cloud environments.
- Working experience with DevOps practices and tools applicable to both Azure and AWS.
- Experience with infrastructure as code (e.g., ARM templates, CloudFormation, Terraform).
- Proficiency in scripting languages (e.g., PowerShell, Bash, Python).
- Solid understanding of networking concepts and their implementation in Azure and AWS.
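"Delivered according to established standards" is often enforced as a tagging policy check before a resource is handed over. A minimal sketch (the required tag set is a hypothetical organizational standard, not from the posting), applicable to Azure and AWS resources alike:

```python
# Hypothetical organizational standard: every resource must carry these tags.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def missing_tags(resource_tags: dict) -> set:
    """Return the mandatory tags a resource lacks, so a delivery pipeline
    can reject or flag non-compliant Azure/AWS resources."""
    return REQUIRED_TAGS - set(resource_tags)
```

A delivery pipeline would call this on each provisioned resource's tag map and fail the hand-off when the returned set is non-empty; the same check also feeds cost-allocation reporting.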

Posted 1 month ago


2 - 5 years

4 - 7 Lacs

Hyderabad

Work from Office

Overview: Be part of the DevOps and cloud infrastructure team that is responsible for cloud security, infrastructure provisioning, and maintaining existing platforms, and that provides our partner teams with guidance for building, maintaining, and optimizing integration and deployment pipelines as code for deploying our applications to AWS and Azure.

Responsibilities:
- Deploy infrastructure in the AWS and Azure clouds using Terraform and infrastructure-as-code best practices.
- Triage and troubleshoot incidents related to our AWS and Azure cloud resources.
- Participate in the development of CI/CD workflows to take applications from build to deployment using modern DevOps tools such as Kubernetes, ArgoCD/Flux, Terraform, and Helm.
- Create automation and tooling using Python, Bash, or any OOP language.
- Configure monitoring and respond to incidents triggered by our existing notification systems.
- Ensure our existing CI/CD pipelines operate without interruption and are continuously optimized as needed.
- Create automation, tooling, and integrations in our Jira projects that make your life easy and benefit the entire organization and business.
- Evaluate and support onboarding of third-party SaaS applications, or work with teams to integrate new tools and services into existing apps.
- Create documentation, runbooks, disaster recovery plans, and processes.

Qualifications:
- 2+ years of experience deploying infrastructure to the Azure and AWS platforms (AKS, EKS, ECR, ACR, Key Vault, IAM, Entra, VPC, VNet, etc.).
- 1+ year of experience using Terraform or writing Terraform modules.
- 1+ year of experience with Git and GitLab or GitHub.
- 1+ year of experience creating CI/CD pipelines in any templatized format (YAML or Jenkins).
- 1+ year of Kubernetes experience, ideally running workloads in a production environment.
- 1+ year of experience with Bash and any other OOP language.
- Good understanding of the software development lifecycle.
- Familiarity with monitoring tools such as Datadog, Splunk, etc.
- Familiarity with automated build processes and tools.
- Able to administer and run basic SQL queries in Postgres, MySQL, or any relational database.
- Familiarity with Cloud Security Posture Management.
- Current skills in the following technologies: Kubernetes, Terraform.
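The "basic SQL queries" expectation is the kind of aggregation an on-call engineer runs against deployment or incident tables. A self-contained sketch using Python's built-in SQLite as a stand-in for Postgres/MySQL (the table and data are invented; the SQL itself is portable):

```python
import sqlite3

# In-memory SQLite stands in for Postgres/MySQL; the SQL is standard.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deployments (service TEXT, status TEXT)")
conn.executemany("INSERT INTO deployments VALUES (?, ?)",
                 [("api", "success"), ("api", "failed"), ("web", "success")])

# Count failed deployments per service -- a typical triage query.
failed = conn.execute(
    "SELECT service, COUNT(*) FROM deployments "
    "WHERE status = 'failed' GROUP BY service"
).fetchall()
print(failed)  # → [('api', 1)]
```

Against a real Postgres instance the only change would be the driver (e.g. psycopg) and connection string.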

Posted 2 months ago

Apply