8.0 - 12.0 years
20 - 30 Lacs
Hyderabad
Work from Office
Design and development of cloud-hosted web applications for the insurance industry, from high-level architecture and network infrastructure down to low-level creation of site layout, user experience, database schema, data structures, workflows, graphics, unit testing, and end-to-end integration testing. Working from static application mock-ups and wireframes, develop front-end user interfaces and page templates in HTML5, CSS, Sass, Less, TypeScript, Bootstrap, Angular, and third-party controls such as Kendo UI/Infragistics. Proficiency in AWS services like Lambda, EC2, S3, and IAM for deploying and managing applications. Excellent programming skills in Python, with the ability to develop, maintain, and debug Python-based applications. Develop, maintain, and debug applications using .NET Core and C#. Stay up to date with the latest industry trends and technologies related to PostgreSQL, AWS, and Python. Design and implement risk management business functionality and in-database analytics. Identify complex data problems, review related information to develop and evaluate options, and design and implement solutions. Design and develop functional, responsive web applications by collaborating with other engineers in the Agile team. Develop REST APIs and understand WCF services. Prepare documentation and specifications.
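As a hedged illustration of the REST-API-on-Lambda work this posting describes, here is a minimal Python handler sketch; the route, parameter name, and response shape are assumptions for illustration, not details from the posting:

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler behind API Gateway: returns a policy
    summary for the id passed as a path parameter (illustrative only)."""
    policy_id = (event.get("pathParameters") or {}).get("policyId")
    if not policy_id:
        return {"statusCode": 400,
                "body": json.dumps({"error": "policyId is required"})}
    # A real service would query PostgreSQL here (e.g. via psycopg2/RDS).
    return {"statusCode": 200,
            "body": json.dumps({"policyId": policy_id, "status": "active"})}
```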
Posted 1 month ago
4.0 - 9.0 years
0 - 3 Lacs
Pune
Work from Office
We are seeking a highly skilled and motivated Full-Stack Node.js Developer to join our dynamic engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable backend services, APIs, and integrations, as well as contributing to the development of our user interfaces. This role requires strong expertise in Node.js, PostgreSQL, and a solid understanding of various AWS services, including S3, Athena, RDS, and EC2. Experience with Stripe integration for payment processing and a proven ability to both write and consume APIs are essential, along with proficiency in front-end technologies like HTML and CSS.

Key Responsibilities:
- Design, develop, and maintain high-performance, scalable, and secure backend services using Node.js.
- Develop and implement RESTful APIs for various internal and external applications, ensuring high availability and performance.
- Integrate with third-party APIs, including payment gateways like Stripe, and other external services.
- Manage and optimize PostgreSQL databases, including schema design, query optimization, and data migration.
- Work extensively with AWS services, specifically:
  - Amazon S3: store and manage application data, backups, and other static assets.
  - AWS Athena: develop and execute analytical queries on data stored in S3 for reporting and insights (see the sketch after this listing).
  - Amazon RDS (PostgreSQL): configure, manage, and optimize PostgreSQL instances within RDS.
  - Amazon EC2: deploy, manage, and scale Node.js applications on EC2 instances.
- Develop responsive and engaging user interfaces using HTML and CSS.
- Implement and maintain secure coding practices, including data encryption, authentication, and authorization mechanisms.
- Collaborate with the client and team to define requirements and deliver high-quality software solutions.
- Participate in code reviews, ensuring code quality, maintainability, and adherence to best practices.
- Troubleshoot and debug production issues, providing timely resolutions.
- Contribute to the continuous improvement of our development processes and tools.

Qualifications:

Technical Skills:
- Proven experience as a Node.js developer with a strong understanding of its asynchronous nature, event loop, and best practices.
- Expertise in database design, development, and optimization with PostgreSQL.
- Hands-on experience with AWS services, including S3 (object storage and management), Athena (serverless query service for S3 data), RDS (managed PostgreSQL), and EC2 (virtual servers for deploying applications).
- Proficiency in designing, building, and consuming RESTful APIs.
- Experience integrating with payment processing platforms, specifically Stripe.
- Strong proficiency in HTML5 and CSS3, including responsive design principles.
- Familiarity with version control systems (Git).
- Understanding of the software development lifecycle (SDLC) and agile methodologies.
- Experience with Redis for caching, session management, and task scheduling.

Experience:
- 5+ years of experience in full-stack development with Node.js.
- 3+ years of experience working with PostgreSQL.
- 5+ years of experience with AWS cloud services.

Nice to have:
- Familiarity with other AWS services (e.g., Lambda, SQS, SNS).
- Experience with microservices architecture.
- Familiarity with JavaScript frameworks/libraries (e.g., React, Angular, Vue.js).

Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal abilities.
- Ability to work independently and as part of a team.
- Proactive and eager to learn new technologies.
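A minimal sketch of the Athena-on-S3 querying mentioned above, written in Python with boto3 for brevity even though the posting's stack is Node.js; the database, query, and output bucket are assumptions:

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Start an Athena query against data in S3 (names are illustrative).
qid = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM events GROUP BY status",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes, then print the result rows.
while True:
    state = athena.get_query_execution(
        QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```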
Posted 1 month ago
5.0 - 7.0 years
3 - 7 Lacs
Hyderabad, Bengaluru
Work from Office
Key Responsibilities:
- Design, implement, and maintain cloud-based infrastructure on AWS.
- Manage and monitor AWS services, including EC2, S3, Lambda, RDS, CloudFormation, VPC, etc.
- Develop automation scripts for deployment, monitoring, and scaling using AWS services (a sketch follows this listing).
- Collaborate with DevOps teams to automate build, test, and deployment pipelines.
- Ensure the security and compliance of cloud environments using AWS security best practices.
- Optimize cloud resource usage to reduce costs while maintaining high performance.
- Troubleshoot issues related to cloud infrastructure and services.
- Participate in capacity planning and disaster recovery strategies.
- Monitor application performance and make necessary adjustments to ensure optimal performance.
- Stay current with new AWS features and tools and evaluate their applicability for the organization.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as an AWS Engineer or in a similar cloud infrastructure role.
- In-depth knowledge of AWS services, including EC2, S3, RDS, Lambda, VPC, CloudWatch, etc.
- Proficiency in scripting languages such as Python, Shell, or Bash.
- Experience with infrastructure-as-code tools like Terraform or AWS CloudFormation.
- Strong understanding of networking concepts, cloud security, and best practices.
- Familiarity with containerization technologies (e.g., Docker, Kubernetes) is a plus.
- Excellent problem-solving, analytical, and troubleshooting skills.
- Strong communication skills, both written and verbal.
- AWS certifications (AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are preferred.

Preferred Skills:
- Experience with serverless architectures and services.
- Knowledge of CI/CD pipelines and DevOps methodologies.
- Experience with monitoring and logging tools like CloudWatch, Datadog, or Prometheus.
- Knowledge of AWS FinOps.
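One hedged example of the automation-scripting and cost-optimization duties above: a boto3 script that stops non-production EC2 instances outside business hours. The tag key/value and region are assumptions for illustration:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Find running instances tagged as non-production (tag values are illustrative).
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:environment", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

idle_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

# Stop them to reduce EC2 spend while maintaining production availability.
if idle_ids:
    ec2.stop_instances(InstanceIds=idle_ids)
    print(f"Stopped: {idle_ids}")
```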
Posted 1 month ago
10.0 - 15.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Grade: 7

Purpose of your role:
This role sits within the ISS Data Platform Team. The Data Platform team is responsible for building and maintaining the platform that enables the ISS business to operate. This role is appropriate for a Lead Data Engineer capable of taking ownership of, and delivering, a subsection of the wider data platform.

Key Responsibilities:
- Design, develop and maintain scalable data pipelines and architectures to support data ingestion, integration and analytics (see the orchestration sketch after this listing).
- Be accountable for technical delivery and take ownership of solutions.
- Lead a team of senior and junior developers, providing mentorship and guidance.
- Collaborate with enterprise architects, business analysts and stakeholders to understand data requirements, validate designs and communicate progress.
- Drive technical innovation within the department to increase code reusability, code quality and developer productivity.
- Challenge the status quo by bringing the very latest data engineering practices and techniques.

Essential Skills and Experience

Core Technical Skills:
- Expert in leveraging cloud-based data platform capabilities (Snowflake, Databricks) to create an enterprise lakehouse.
- Advanced expertise with the AWS ecosystem and experience using a variety of core AWS data services such as Lambda, EMR, MSK, Glue and S3.
- Experience designing event-based or streaming data architectures using Kafka.
- Advanced expertise in Python and SQL. Java/Scala expertise is welcome, but enterprise experience with Python is required.
- Expert in designing, building and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation.
- Data security and performance optimization: experience implementing data access controls to meet regulatory requirements.
- Experience using both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (DynamoDB, OpenSearch, Redis) offerings.
- Experience implementing CDC ingestion.
- Experience using orchestration tools (Airflow, Control-M, etc.).

Bonus Technical Skills:
- Strong experience in containerisation and deploying applications to Kubernetes.
- Strong experience in API development using Python-based frameworks like FastAPI.

Key Soft Skills:
- Problem-solving: leadership experience in problem-solving and technical decision-making.
- Communication: strong in strategic communication and stakeholder engagement.
- Project management: experienced in overseeing project lifecycles, working with project managers to manage resources.
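A minimal Airflow sketch of the pipeline-orchestration work described above, assuming Airflow 2.4+; the DAG id, schedule, and task bodies are illustrative assumptions, not this team's actual pipeline:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Illustrative: pull a batch from a source system (e.g. a CDC landing zone).
    print("extracting batch")

def load():
    # Illustrative: write curated output to the lakehouse (e.g. S3/Snowflake).
    print("loading batch")

with DAG(
    dag_id="ingestion_example",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract", python_callable=extract) >> \
        PythonOperator(task_id="load", python_callable=load)
```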
Posted 1 month ago
4.0 - 9.0 years
25 - 35 Lacs
Bengaluru
Hybrid
Dodge Position Title: Software Engineer
STG Labs Position Title:
Location: Bangalore, India

About Dodge
Dodge Construction Network exists to deliver the comprehensive data and connections the construction industry needs to build thriving communities. Our legacy is deeply rooted in empowering our customers with transformative insights, igniting their journey towards unparalleled business expansion and success. We serve decision-makers who seek reliable growth and who value relationships built on trust and quality. By combining our proprietary data with cutting-edge software, we deliver to our customers the essential intelligence needed to excel within their respective landscapes. We propel the construction industry forward by transforming data into tangible guidance, driving unparalleled advancement. Dodge is the catalyst for modern construction. https://www.construction.com/

About Symphony Technology Group (STG)
STG is a Silicon Valley (California) based private equity firm that has a long and successful track record of transforming high-potential software and software-enabled services companies, as well as insights-oriented companies, into definitive market leaders. The firm brings expertise, flexibility, and resources to build strategic value and unlock the potential of innovative companies. Partnering to build customer-centric, market-winning portfolio companies, STG creates sustainable foundations for growth that bring value to all existing and future stakeholders. The firm is dedicated to transforming and building outstanding technology companies in partnership with world-class management teams. With over $5.0 billion in assets under management, including a recently raised $2.0 billion fund, STG's expansive portfolio has consisted of more than 30 global companies.

STG Labs is the incubation center for many of STG's portfolio companies, building their engineering, professional services, and support delivery teams in India. STG Labs offers an entrepreneurial start-up environment for software and AI engineers, data scientists and analysts, project and product managers, and provides a unique opportunity to work directly for a software or technology company. Based in Bangalore, STG Labs supports hybrid working. https://stg.com

Roles and Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL processes leveraging AWS services (a sketch follows this listing).
- Collaborate closely with data architects, business analysts, and DevOps teams to translate business requirements into technical data solutions.
- Apply SDLC best practices, including planning, coding standards, code reviews, testing, and deployment.
- Automate workflows and optimize data pipelines for efficiency, performance, and reliability.
- Implement monitoring and logging to ensure the health and performance of data systems.
- Ensure data security and compliance through adherence to industry and internal standards.
- Participate actively in agile development processes and contribute to sprint planning, stand-ups, retrospectives, and documentation efforts.

Qualifications

Hands-on working knowledge and experience is required in:
- Data structures
- Memory management
- Basic algorithms (search, sort, etc.)

Hands-on working knowledge and experience is preferred in:
- AWS data services: Glue, EMR, Kinesis, Lambda, Athena, Redshift, S3
- Scripting and programming languages: Python, Bash, SQL
- Version control and CI/CD tools: Git, Jenkins, Bitbucket
- Database systems and data engineering: data modeling, data warehousing principles
- Infrastructure as Code (IaC): Terraform, CloudFormation
- Containerization and orchestration: Docker, Kubernetes

Certifications Preferred: AWS certifications (Data Analytics Specialty, Solutions Architect Associate).
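A hedged PySpark sketch of the kind of S3-to-S3 ETL described above; the bucket paths and column names are invented for illustration, and a Glue job would wrap similar logic in a GlueContext:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-example").getOrCreate()

# Read raw CSV landed in S3 (path is illustrative).
raw = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

# Basic cleansing plus a derived partition column.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount").cast("double") > 0)
)

# Write partitioned Parquet for downstream Athena/Redshift consumption.
curated.write.mode("overwrite").partitionBy("order_date") \
       .parquet("s3://curated-bucket/orders/")
```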
Posted 1 month ago
5.0 - 8.0 years
30 - 35 Lacs
Hyderabad
Work from Office
Requirements:
- 4+ years of overall experience in software development.
- 3+ years of hands-on experience in Python (OOP concepts), with extensive ability to write complex Python code.
- 2+ years of hands-on experience in AWS (Lambda, S3, EC2, Step Functions) (see the sketch after this list).
- Knowledge of code versioning tools (Git) and databases.
- Able to work with Jira workflows.
- Good to have: experience in Azure.
- Good logical thinking.
- Strong analytical and debugging skills.
- Strong communication skills.
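A small hedged example of driving the Lambda/Step Functions stack named above from Python; the state machine ARN and input payload are placeholders:

```python
import json
import boto3

sfn = boto3.client("stepfunctions", region_name="ap-south-1")

# Kick off a Step Functions state machine run (ARN is a placeholder).
response = sfn.start_execution(
    stateMachineArn="arn:aws:states:ap-south-1:123456789012:stateMachine:example",
    input=json.dumps({"bucket": "my-bucket", "key": "incoming/data.json"}),
)
print("Started execution:", response["executionArn"])
```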
Posted 1 month ago
7.0 - 10.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Role Overview
We are seeking an experienced Data Engineer with 7-10 years of experience to design, develop, and optimize data pipelines while integrating machine learning (ML) capabilities into production workflows. The ideal candidate will have a strong background in data engineering, big data technologies, cloud platforms, and ML model deployment. This role requires expertise in building scalable data architectures, processing large datasets, and supporting machine learning operations (MLOps) to enable data-driven decision-making.

Key Responsibilities

Data Engineering & Pipeline Development:
- Design, develop, and maintain scalable, robust, and efficient data pipelines for batch and real-time data processing.
- Build and optimize ETL/ELT workflows to extract, transform, and load structured and unstructured data from multiple sources.
- Work with distributed data processing frameworks like Apache Spark, Hadoop, or Dask for large-scale data processing.
- Ensure data integrity, quality, and security across the data pipelines.
- Implement data governance, cataloging, and lineage tracking using appropriate tools.

Machine Learning Integration:
- Collaborate with data scientists to deploy, monitor, and optimize ML models in production (see the tracking sketch after this listing).
- Design and implement feature engineering pipelines to improve model performance.
- Build and maintain MLOps workflows, including model versioning, retraining, and performance tracking.
- Optimize ML model inference for low-latency and high-throughput applications.
- Work with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn, and deployment tools like Kubeflow, MLflow, or SageMaker.

Cloud & Big Data Technologies:
- Architect and manage cloud-based data solutions using AWS, Azure, or GCP.
- Utilize serverless computing (AWS Lambda, Azure Functions) and containerization (Docker, Kubernetes) for scalable deployment.
- Work with data lakehouses (Delta Lake, Iceberg, Hudi) for efficient storage and retrieval.

Database & Storage Management:
- Design and optimize relational (PostgreSQL, MySQL, SQL Server) and NoSQL (MongoDB, Cassandra, DynamoDB) databases.
- Manage and optimize data warehouses (Snowflake, BigQuery, Redshift, Databricks) for analytical workloads.
- Implement data partitioning, indexing, and query optimizations for performance improvements.

Collaboration & Best Practices:
- Work closely with data scientists, software engineers, and DevOps teams to develop scalable and reusable data solutions.
- Implement CI/CD pipelines for automated testing, deployment, and monitoring of data workflows.
- Follow best practices in software engineering, data modeling, and documentation.
- Continuously improve the data infrastructure by researching and adopting new technologies.

Required Skills & Qualifications

Technical Skills:
- Programming languages: Python, SQL, Scala, Java
- Big data technologies: Apache Spark, Hadoop, Dask, Kafka
- Cloud platforms: AWS (Glue, S3, EMR, Lambda), Azure (Data Factory, Synapse), GCP (BigQuery, Dataflow)
- Data warehousing: Snowflake, Redshift, BigQuery, Databricks
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra
- ETL/ELT tools: Airflow, dbt, Talend, Informatica
- Machine learning tools: MLflow, Kubeflow, TensorFlow, PyTorch, Scikit-learn
- MLOps and model deployment: Docker, Kubernetes, SageMaker, Vertex AI
- DevOps and CI/CD: Git, Jenkins, Terraform, CloudFormation

Soft Skills:
- Strong analytical and problem-solving abilities.
- Excellent collaboration and communication skills.
- Ability to work in an agile and cross-functional team environment.
- Strong documentation and technical writing skills.

Preferred Qualifications:
- Experience with real-time streaming solutions like Apache Flink or Spark Streaming.
- Hands-on experience with vector databases and embeddings for ML-powered applications.
- Knowledge of data security, privacy, and compliance frameworks (GDPR, HIPAA).
- Experience with GraphQL and REST API development for data services.
- Understanding of LLMs and AI-driven data analytics.
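As a hedged illustration of the model-versioning and performance-tracking workflow mentioned above, a minimal MLflow sketch; the experiment name, model, and metric are illustrative, and a local MLflow tracking store is assumed:

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real feature pipeline.
X, y = make_classification(n_samples=1000, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

mlflow.set_experiment("feature-pipeline-demo")  # hypothetical name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```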
Posted 1 month ago
9.0 - 12.0 years
20 - 25 Lacs
Hyderabad
Work from Office
Designing, managing, and optimizing our cloud infrastructure to ensure high availability, reliability, and scalability of services. Architect, deploy, and maintain AWS infrastructure using Infrastructure-as-Code (IaC) tools such as Terraform or CloudFormation.

Required Candidate profile: experience in a Site Reliability Engineer or DevOps role, with a focus on AWS cloud infrastructure and AWS services such as EC2, S3, RDS, VPC, Lambda, CloudFormation, and CloudWatch.
Posted 1 month ago
5.0 - 8.0 years
15 - 22 Lacs
Chennai
Work from Office
We are hiring passionate DevOps professionals with strong Kubernetes and AWS experience. Immediate joiners or candidates with 15 days' notice preferred.

Key Responsibilities:
- Administer and manage Kubernetes clusters (CKA preferred)
- Implement Infrastructure as Code using Terraform or CloudFormation
- Automate CI/CD pipelines using Jenkins, GitOps, and Helm charts
- Manage and optimize AWS services: ALB/NLB, Lambda, RDS, S3, Route 53, API Gateway, CloudFront
- Monitor systems with tools like Datadog, Prometheus
- Apply security best practices and ensure cost optimization
- Collaborate with Agile teams to deliver scalable and reliable infrastructure

Required Skills:
- Kubernetes (must)
- AWS (EC2, RDS, S3, Lambda, etc.)
- Docker, Helm, Jenkins, Git
- Terraform or CloudFormation
- MongoDB Atlas (nice to have)
- Monitoring: Prometheus, Datadog

Why Join Us?
- Competitive pay (up to 22 LPA)
- Opportunity to work on cutting-edge cloud-native tech
- Fast-paced, agile environment
- Immediate onboarding
Posted 1 month ago
3.0 - 8.0 years
9 - 18 Lacs
Hyderabad
Hybrid
Data Engineer with Python development experience
Experience: 3+ years
Mode: Hybrid (2-3 days/week)
Location: Hyderabad

Key Responsibilities:
- Develop, test, and deploy data processing pipelines using AWS serverless technologies such as AWS Lambda, Step Functions, DynamoDB, and S3 (a handler sketch follows this listing).
- Implement ETL processes to transform and process structured and unstructured data efficiently.
- Collaborate with business analysts and other developers to understand requirements and deliver solutions that meet business needs.
- Write clean, maintainable, and well-documented code following best practices.
- Monitor and optimize the performance and cost of serverless applications.
- Ensure high availability and reliability of the pipeline through proper design and error-handling mechanisms.
- Troubleshoot and debug issues in serverless applications and data workflows.
- Stay up to date with emerging technologies in the AWS and serverless ecosystem to recommend improvements.

Required Skills and Experience:
- 3-5 years of hands-on Python development experience, including experience with libraries like boto3, Pandas, or similar tools for data processing.
- Strong knowledge of AWS services, especially Lambda, S3, DynamoDB, Step Functions, SNS, SQS, and API Gateway.
- Experience building data pipelines or workflows to process and transform large datasets.
- Familiarity with serverless architecture and event-driven programming.
- Knowledge of best practices for designing secure and scalable serverless applications.
- Proficiency in version control systems (e.g., Git) and collaboration tools.
- Understanding of CI/CD pipelines and DevOps practices.
- Strong debugging and problem-solving skills.
- Familiarity with database systems, both SQL (e.g., RDS) and NoSQL (e.g., DynamoDB).

Preferred Qualifications:
- AWS certifications (e.g., AWS Certified Developer - Associate or AWS Certified Solutions Architect - Associate).
- Familiarity with testing frameworks (e.g., pytest) and ensuring test coverage for Python applications.
- Experience with Infrastructure as Code (IaC) tools such as AWS CDK, CloudFormation.
- Knowledge of monitoring and logging tools.
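A minimal sketch of the event-driven serverless pipeline this listing describes: a Lambda handler that reacts to an S3 put event, transforms the file with pandas, and writes it back. The bucket layout, prefix, and transformation are assumptions for illustration:

```python
import urllib.parse

import boto3
import pandas as pd

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Triggered by an S3 put event: reads the new CSV, drops incomplete
    rows, and writes the result to a curated prefix (names illustrative)."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    obj = s3.get_object(Bucket=bucket, Key=key)
    df = pd.read_csv(obj["Body"])

    cleaned = df.dropna()  # illustrative transformation
    s3.put_object(
        Bucket=bucket,
        Key=f"curated/{key.rsplit('/', 1)[-1]}",
        Body=cleaned.to_csv(index=False).encode(),
    )
    return {"rows_in": len(df), "rows_out": len(cleaned)}
```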
Posted 1 month ago
6.0 - 10.0 years
22 - 25 Lacs
Bengaluru
Work from Office
Proficiency in Python, SQL, data transformation, and scripting. Experience with data pipeline and workflow tools such as Apache Airflow, Flyte, and Argo. Hands-on experience with Spark/PySpark, Docker, and Kubernetes. Strong experience with relational databases (e.g., SQL Server, PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra). Expertise in cloud data platforms such as AWS (Glue, Redshift, S3), Azure (Data Factory, Synapse), or GCP (BigQuery, Dataflow).
Posted 1 month ago
6.0 - 8.0 years
8 - 10 Lacs
Pune
Work from Office
Hello Visionary! We empower our people to stay resilient and relevant in a constantly changing world. We're looking for people who are always searching for creative ways to grow and learn, people who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you'd make a great addition to our vibrant team.

Siemens founded the new business unit Siemens Foundational Technologies (formerly known as Siemens IoT Services) on April 1, 2019, headquartered in Munich, Germany. It has been created to unlock the digital future of its clients by offering end-to-end support on their outstanding digitalization journey. Siemens Foundational Technologies is a strategic advisor and a trusted implementation partner in digital transformation and industrial IoT, with a global network of more than 8,000 employees in 10 countries and 21 offices. Highly skilled and experienced specialists offer services ranging from consulting to craft & prototyping to solution, implementation, and operation, all from a single source.

We are looking for a Senior DevOps Engineer.

You'll make a difference by:

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines using GitLab, including configuring GitLab Runners.
- Build, manage, and scale containerized applications using Docker, Kubernetes, and Helm.
- Automate infrastructure provisioning and management with Terraform.
- Manage and optimize cloud-based environments, especially AWS.
- Administer and optimize Kafka clusters for data streaming and processing (see the producer sketch after this listing).
- Oversee the performance and reliability of databases and Linux environments.
- Monitor and enhance system health using tools like Prometheus and Grafana.
- Collaborate with cross-functional teams to implement DevOps best practices.
- Ensure system security, scalability, and disaster recovery readiness.
- Troubleshoot and resolve technical issues across the infrastructure.

Required Skills & Qualifications:
- 6-8 years of experience in DevOps, system administration, or a related role.
- Expertise in CI/CD tools and workflows, especially GitLab Pipelines and GitLab Runners.
- Proficient in containerization and orchestration tools like Docker, Kubernetes, and Helm.
- Strong hands-on experience with Docker Swarm, including creating and managing Docker clusters.
- Proficiency in packaging Docker images for deployment.
- Strong hands-on experience with Kubernetes, including managing clusters and deploying applications.
- Strong hands-on experience with Terraform for Infrastructure as Code (IaC).
- In-depth knowledge of AWS services, including EC2, S3, IAM, EKS, MSK, Route 53, and VPC.
- Solid experience in managing and maintaining Kafka ecosystems.
- Strong Linux system administration skills.
- Proficiency in database management, optimization, and troubleshooting.
- Experience with monitoring tools like Prometheus and Grafana.
- Excellent scripting skills in languages like Bash and Python.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
- Excellent communication skills and a collaborative mindset.

Good to Have Skills:
- Experience with Keycloak for identity and access management.
- Familiarity with Nginx or Traefik for reverse proxy and load balancing.
- Hands-on experience in PostgreSQL maintenance, including backups, tuning, and troubleshooting.
- Knowledge of the railway domain, including industry-specific challenges and standards.
- Experience in implementing and managing high-availability architectures.
- Exposure to distributed systems and microservices architecture.

Desired Skills:
- 5-8 years of experience is required.
- Great communication skills.
- Analytical and problem-solving skills.

This role is based in Pune and is an individual contributor role. You might be required to visit other locations within India and outside. In return, you'll get the chance to work with teams impacting the shape of things to come.
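A minimal hedged sketch of producing to a Kafka topic from Python with confluent-kafka; the broker address and topic name are placeholders for a real MSK/Kafka cluster:

```python
from confluent_kafka import Producer

# Broker address and topic are placeholders for a real cluster.
producer = Producer({"bootstrap.servers": "localhost:9092"})

def on_delivery(err, msg):
    # Called once the broker acknowledges (or rejects) each message.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()}[{msg.partition()}]@{msg.offset()}")

for i in range(3):
    producer.produce("telemetry-events", value=f"event-{i}".encode(),
                     callback=on_delivery)

producer.flush()  # block until all queued messages are delivered
```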
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
We are looking for a Cloud Application Developer.

You'll make a difference by:
- Proficiency in developing backend (Python) and smaller frontend applications (Angular) on our AWS-based managed cloud environment.
- Hands-on contribution to the development of larger composite applications.
- Knowledge of operating and troubleshooting existing applications.
- Ability to break down and implement high-level concepts created by architects and the PO.
- Exposure to AWS cloud platform services (Lambda, ECS, S3, RDS/DynamoDB); certification is a plus.
- Practical experience in setting up and maintaining CI/CD pipelines, containerization (Docker), and version control systems (Git).
- Proficiency in full-stack technologies (front-end Angular, databases, APIs).

You'll win us over by:
- Holding a graduate BE / B.Tech / MCA / M.Tech / M.Sc with a good academic record.
- 5+ years of experience in software development, with a focus on Python.
- Familiarity with agile development processes and principles.

Optional skills: C#, Kubernetes, DevOps engineering, Infrastructure-as-Code
Posted 1 month ago
3.0 - 5.0 years
5 - 7 Lacs
Pune
Work from Office
You'll make a difference by:

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines using GitLab, including configuring GitLab Runners.
- Build, manage, and scale containerized applications using Docker, Kubernetes, and Helm.
- Automate infrastructure provisioning and management with Terraform.
- Manage and optimize cloud-based environments, especially AWS.
- Administer and optimize Kafka clusters for data streaming and processing.
- Oversee the performance and reliability of databases and Linux environments.
- Monitor and enhance system health using tools like Prometheus and Grafana.
- Collaborate with cross-functional teams to implement DevOps best practices.
- Ensure system security, scalability, and disaster recovery readiness.
- Troubleshoot and resolve technical issues across the infrastructure.

Required Skills & Qualifications:
- 3-5 years of experience in DevOps, system administration, or a related role.
- Expertise in CI/CD tools and workflows, especially GitLab Pipelines and GitLab Runners.
- Proficient in containerization and orchestration tools like Docker, Kubernetes, and Helm.
- Strong hands-on experience with Docker Swarm, including creating and managing Docker clusters.
- Proficiency in packaging Docker images for deployment.
- Strong hands-on experience with Kubernetes, including managing clusters and deploying applications.
- Strong hands-on experience with Terraform for Infrastructure as Code (IaC).
- In-depth knowledge of AWS services, including EC2, S3, IAM, EKS, MSK, Route 53, and VPC.
- Solid experience in managing and maintaining Kafka ecosystems.
- Strong Linux system administration skills.
- Proficiency in database management, optimization, and troubleshooting.
- Experience with monitoring tools like Prometheus and Grafana.
- Excellent scripting skills in languages like Bash and Python.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
- Excellent communication skills and a collaborative mindset.

Good to Have Skills:
- Experience with Keycloak for identity and access management.
- Familiarity with Nginx or Traefik for reverse proxy and load balancing.
- Hands-on experience in PostgreSQL maintenance, including backups, tuning, and troubleshooting.
- Knowledge of the railway domain, including industry-specific challenges and standards.
- Experience in implementing and managing high-availability architectures.
- Exposure to distributed systems and microservices architecture.

Desired Skills:
- 3-5 years of experience is required.
- Great communication skills.
- Analytical and problem-solving skills.
Posted 1 month ago
6 - 10 years
13 - 18 Lacs
Hyderabad
Remote
Hi everyone, greetings from Intuition IT, a global recruitment firm. We have an exciting job opportunity for DevOps with AI Platform and Data Science with our leading client.

Location: PAN India (Remote)
Job Type: Long-term contract

Job Description:
- Support the platform, which offers infrastructure to Data Science / Data Analytics / MLOps teams.
- Resolve issues in provisioning of new use cases on the AI Platform.
- Resolve incidents and service requests related to the AI Platform.
- Collaborate with IAM teams for account provisioning.
- Coordinate with other platform teams: AWS, Snowflake, Databricks, etc.
- Monitor CI/CD pipelines in the AI Platform.
- Proficient in AWS tools (IAM, S3, EKS, SageMaker, ACM, ECR, RDS, Secrets Manager, Lambda, Step Functions).
- DevOps tools: Jenkins, Bitbucket, JFrog, SonarQube, Checkmarx, Kubernetes, Docker, etc.

Please share your CV to: maheshwari.p@intuition-IT.com
Posted 1 month ago
10 - 15 years
15 - 25 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Hybrid
Experience: 10+ years

Role Overview:
We are seeking an experienced AWS Data & Analytics Architect with a strong background in delivery and excellent communication skills. The ideal candidate will have over 10 years of experience and a proven track record in managing teams and client relationships. You will be responsible for leading data modernization and transformation projects using AWS services.

Key Responsibilities:
- Lead and architect data modernization/transformation projects using AWS services.
- Manage and mentor a team of data engineers and analysts.
- Build and maintain strong client relationships, ensuring successful project delivery.
- Design and implement scalable data architectures and solutions.
- Oversee the migration of large datasets to AWS, ensuring data integrity and security.
- Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
- Ensure best practices in data management and governance are followed.

Required Skills and Experience:
- 10+ years of experience in data architecture and analytics.
- Hands-on experience with AWS services such as Redshift, S3, Glue, Lambda, RDS, and others.
- Proven experience in delivering 1-2 large data migration/modernization projects using AWS.
- Strong leadership and team management skills.
- Excellent communication and interpersonal skills.
- Deep understanding of data modeling, ETL processes, and data warehousing.
- Experience with data governance and security best practices.
- Ability to work in a fast-paced, dynamic environment.

Preferred Qualifications:
- AWS Certified Solutions Architect - Professional or AWS Certified Big Data - Specialty.
- Experience with other cloud platforms (e.g., Azure, GCP) is a plus.
- Familiarity with machine learning and AI technologies.
Posted 1 month ago
1 - 4 years
6 - 10 Lacs
Bengaluru
Work from Office
What You'll Own:
- Full-stack systems: architect and build end-to-end applications using Flask, FastAPI, Node.js, React (or Next.js), and Tailwind.
- AI integrations: build and optimize pipelines involving LLMs (OpenAI, Groq, LLaMA), Whisper, TTS, embeddings, RAG, LangChain, LangGraph, and vector DBs like Pinecone/Milvus (see the retrieval sketch after this listing).
- Cloud infrastructure: deploy, monitor, and scale systems on AWS/GCP using EC2, S3, IAM, Lambda, Kafka, and ClickHouse.
- Real-time systems: design asynchronous workflows (Kafka, Celery, WebSockets) for voice-based agents, event tracking, or search indexing.
- System orchestration: set up scalable infra with autoscaling groups, Docker, and Kubernetes (PoC-ready, if not full prod).
- Growth-ready features: implement in-app nudges, tracking with Amplitude, A/B testing, and funnel optimization.

Must-Haves:
- 1+ years of experience building production-grade full-stack systems.
- Fluency in Python and JS/TS (Node.js, React), shipping independently without handholding.
- Deep understanding of LLM pipelines, embeddings, vector search, and retrieval-augmented generation (RAG).
- Experience with AR frameworks (ARKit, ARCore), 3D rendering (Three.js), and real-time computer vision (MediaPipe).
- Strong grasp of modern AI model architectures: diffusion models, GANs, AI agents.
- Hands-on experience with system debugging, performance profiling, and infra cost optimization.
- Comfort with ambiguity: fast iteration, shipping prototypes, breaking things to learn faster.

Bonus if you've built agentic apps, AI workflows, or virtual try-ons.
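A hedged, minimal sketch of the embeddings-plus-retrieval step of a RAG pipeline, using the OpenAI embeddings API with in-memory cosine similarity standing in for a vector DB like Pinecone or Milvus; the model name and corpus are illustrative:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

docs = [
    "Returns are accepted within 30 days of delivery.",
    "Shipping is free on orders above Rs. 999.",
    "Support is available 24x7 via chat.",
]

def embed(texts):
    # Model name is an assumption; any embedding model would do.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)
query_vec = embed(["When can I return an order?"])[0]

# Cosine similarity stands in for a vector-DB nearest-neighbour lookup.
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)
print("Best match:", docs[int(np.argmax(scores))])
```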
Posted 1 month ago
7 - 10 years
9 - 12 Lacs
Mumbai, Maharashtra
Work from Office
About the Role:
Grade Level (for internal use): 10

S&P Global Dow Jones Indices

The Role: Senior Development Engineer - Python Full Stack

S&P Dow Jones Indices, a global leader in providing investable and benchmark indices to the financial markets, is looking for a Senior Development Engineer with full-stack experience to join our technology team. This is mostly a back-end development role but will also support UI development work.

The Team: You will be part of a global technology team comprising Dev, QA, and BA teams, and will be responsible for analysis, design, development, and testing.

Responsibilities and Impact:
You will be working on one of the key systems responsible for calculating re-balancing weights and asset selections for S&P indices. Ultimately, the output of this team is used to maintain some of the most recognized and important investable assets globally.
- Development of RESTful web services and databases; supporting UI development requirements (a FastAPI sketch follows this listing).
- Interfacing with various AWS infrastructure and services, deploying to a Docker environment.
- Coding, documentation, testing, debugging, and tier-3 support.
- Work directly with stakeholders and the technical architect to formalize/document requirements, both for supporting the existing application and for new initiatives.
- Perform application and system performance tuning and troubleshoot performance issues.
- Coordinate closely with the QA team and the scrum master to optimize team velocity and task flow.
- Help establish and maintain technical standards via code reviews and pull requests.

What's in it for you:
This is an opportunity to work on a team of highly talented and motivated engineers at a highly respected company. You will work on new development as well as enhancements to existing functionality.

What We're Looking For:

Basic Qualifications:
- 7-10 years of IT experience in application development and support, primarily in back-end API and database development roles, with at least some UI development experience.
- Bachelor's degree in Computer Science, Information Systems, or Engineering, or, in lieu, a demonstrated equivalence in work experience.
- Proficiency in modern Python 3.10+ (minimum 4 years of dedicated, recent Python experience).
- AWS services experience, including API Gateway, ECS/Docker, DynamoDB, S3, Kafka, SQS.
- SQL database experience, with at least 1 year of Postgres.
- Python libraries experience including Pydantic, SQLAlchemy, and at least one of Flask, FastAPI, or Sanic, focusing on creating RESTful endpoints for data services.
- JavaScript/TypeScript experience and at least one of Vue 3, React, or Angular.
- Strong unit testing skills with PyTest or UnitTest, and API testing using Postman or Bruno.
- CI/CD build process experience using Jenkins.
- Experience with software testing (unit testing, integration testing, test-driven development).
- Strong work ethic and good communication skills.

Additional Preferred Qualifications:
- Basic understanding of financial markets (stocks, funds, indices, etc.).
- Experience working in mission-critical enterprise organizations.
- A passion for creating high-quality code and broad unit test coverage.
- Ability to understand complex business problems, break them into smaller executable parts, and delegate.
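A minimal sketch of the FastAPI-plus-Pydantic RESTful endpoints this role calls for; the index schema, route, and in-memory store are invented for illustration (a real service would back this with Postgres via SQLAlchemy):

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class IndexWeight(BaseModel):
    # Illustrative schema for a re-balancing result row.
    ticker: str
    weight: float

# Stand-in for a Postgres-backed store (names are hypothetical).
WEIGHTS = {"SPX-DEMO": [IndexWeight(ticker="AAA", weight=0.6),
                        IndexWeight(ticker="BBB", weight=0.4)]}

@app.get("/indices/{index_id}/weights", response_model=list[IndexWeight])
def get_weights(index_id: str):
    """Return the current constituent weights for an index."""
    if index_id not in WEIGHTS:
        raise HTTPException(status_code=404, detail="unknown index")
    return WEIGHTS[index_id]
```

Run locally with `uvicorn app:app` and query `GET /indices/SPX-DEMO/weights`.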
Posted 1 month ago
4 - 9 years
6 - 11 Lacs
Noida, Uttar Pradesh
Work from Office
The Team:
We are seeking an experienced, technically strong TechOps Engineer to join a global team that delivers specialized technical support for major product deliverables and availability of our products. This role focuses on continuously improving support processes through collaboration, centred on proactive, technical, and innovative engagement across business stakeholders, operations, and development teams.

Responsibilities:
- Apply strong technical skills and good business knowledge, together with investigative techniques, to identify and resolve issues efficiently and in a timely manner.
- Immerse in the business domain; identify and implement innovative solutions and technologies that enhance system and application monitoring.
- Demonstrate excellent communication skills, valuable for managing service incidents and working collaboratively with other teams.
- Implement and monitor system alerts for early detection and mitigation of potential service incidents.
- Contribute solutions that address system and application vulnerabilities.
- Constantly coordinate with product and development teams to ensure support readiness of new releases and enhancements.
- Work on tooling, solutions, and automation of operational support functions.

Education and experience:
- University graduate with a Bachelor's degree in Computer Science or a Computer Engineering-related field.
- Experience: 4+ years.
- Extensive experience in an application support role.
- Experience working on AWS cloud technologies (Lambda, SQS, SNS, S3, DynamoDB, Step Functions, EC2, Fargate, etc.).
- Knowledgeable in the SDLC, with experience raising development bugs, including priority assessment, high-quality analysis, and detailed investigation.
- Fundamental working knowledge of RDBMS (Oracle, SQL Server, and RDS), including stored procedures, complex joins, database query plan analysis, and monitoring.
- Broad knowledge of server administration across different operating systems such as Linux and Windows. Good shell scripting experience is a must; ability to use Python scripting is an advantage.
- Ideally familiar with monitoring tools such as Datadog, PagerDuty, Splunk, and Centreon.
- Demonstrable experience working on highly transactional, available, and scalable business-critical systems.
- Good understanding of software architecture: component and application breakdown and interaction.
- Commercial awareness; knowledge of, or experience working in, the financial services industry would be a plus.
- Excellent understanding of software systems and technology.
- Good understanding of software support team functions and a solid understanding of the end-to-end application development process.
- A strong desire to keep up with the latest developments in related technologies.

Personal competencies:
- Confident individual who can represent the team at various levels.
- Excellent analytical and problem-solving skills.
- Ability to carry out business impact analysis and prioritize tasks according to severity and importance.

Communication:
- Must be a strong communicator, both written and verbal, in English.
- Excellent listening, presentation, and interpersonal skills.
- Ability to communicate ideas in both technical and user-friendly language.

Teamwork:
- The ideal candidate is a self-starter capable of working independently as well as contributing to the team's requirements.
- Able to work flexible shift hours, including weekends, to meet work requirements and project deadlines.
Posted 1 month ago
6 - 10 years
30 - 35 Lacs
Bengaluru
Work from Office
We are seeking an experienced Amazon Redshift Developer / Data Engineer to design, develop, and optimize cloud-based data warehousing solutions. The ideal candidate should have expertise in Amazon Redshift, ETL processes, SQL optimization, and cloud-based data lake architectures. This role involves working with large-scale datasets, performance tuning, and building scalable data pipelines.

Key Responsibilities:
- Design, develop, and maintain data models, schemas, and stored procedures in Amazon Redshift.
- Optimize Redshift performance using distribution styles, sort keys, and compression techniques.
- Build and maintain ETL/ELT data pipelines using AWS Glue, AWS Lambda, Apache Airflow, and dbt (a load sketch follows this listing).
- Develop complex SQL queries, stored procedures, and materialized views for data transformations.
- Integrate Redshift with AWS services such as S3, Athena, Glue, Kinesis, and DynamoDB.
- Implement data partitioning, clustering, and query tuning strategies for optimal performance.
- Ensure data security, governance, and compliance (GDPR, HIPAA, CCPA, etc.).
- Work with data scientists and analysts to support BI tools like QuickSight, Tableau, and Power BI.
- Monitor Redshift clusters, troubleshoot performance issues, and implement cost-saving strategies.
- Automate data ingestion, transformations, and warehouse maintenance tasks.

Required Skills & Qualifications:
- 6+ years of experience in data warehousing, ETL, and data engineering.
- Strong hands-on experience with Amazon Redshift and AWS data services.
- Expertise in SQL performance tuning, indexing, and query optimization.
- Experience with ETL/ELT tools like AWS Glue, Apache Airflow, dbt, or Talend.
- Knowledge of big data processing frameworks (Spark, EMR, Presto, Athena).
- Familiarity with data lake architectures and the modern data stack.
- Proficiency in Python, shell scripting, or PySpark for automation.
- Experience working in Agile/DevOps environments with CI/CD pipelines.
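A hedged sketch of one common ingestion step above, bulk-loading S3 data into Redshift with a COPY statement via the redshift_connector driver; the cluster endpoint, credentials, table, and IAM role ARN are all placeholders:

```python
import redshift_connector

# Connection details are placeholders for a real cluster endpoint.
conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="analytics",
    user="etl_user",
    password="***",
)
conn.autocommit = True
cur = conn.cursor()

# Bulk-load partitioned Parquet from S3 (role ARN is a placeholder).
cur.execute("""
    COPY sales.orders
    FROM 's3://curated-bucket/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
""")

# Quick sanity check on the loaded rows.
cur.execute("SELECT COUNT(*) FROM sales.orders;")
print("rows loaded:", cur.fetchone()[0])
```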
Posted 1 month ago
5 - 8 years
10 - 15 Lacs
Chennai, Bengaluru
Work from Office
Hiring: DevOps Engineer - Immediate Joiners
Location: Offshore (Chennai / Bangalore preferred)
Experience: 5+ years

We're looking for a DevOps Engineer to support our web and mobile dev teams with CI/CD, GitLab, and automation tooling.

Key Skills:
- GitLab CI/CD, Docker, Terraform
- Kubernetes (Rancher a plus), Helm, Bash/NodeJS scripting
- AWS, S3, Infra-as-Code
- Mobile DevOps exposure, iOS tooling, JFrog Artifactory
- Agile experience, strong troubleshooting skills

Join us immediately! Send your resume or DM now.
#DevOps #ImmediateJoiner #GitLab #Terraform #Docker #AWS #HiringNow #ChennaiJobs #BangaloreJobs
Posted 1 month ago
6 - 11 years
15 - 30 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
Position: Senior AWS Data Engineer
Experience: 6+ years
Locations: Pune, Hyderabad, Gurugram, Bangalore
Notice Period: Immediate to 30 days preferred

Job Description:
We are hiring a Senior AWS Data Engineer to join our growing team. The ideal candidate will have deep expertise in AWS data services, strong ETL experience, and a passion for solving complex data problems at scale.

Key Responsibilities:
- Design and develop scalable, high-performance data pipelines in AWS.
- Work with services like Glue, Redshift, S3, EMR, Lambda, and Athena.
- Build and optimize ETL processes for both structured and unstructured data.
- Collaborate with cross-functional teams to deliver actionable data solutions.
- Implement best practices for data quality, security, and cost-efficiency.

Required Skills:
- 6+ years in data engineering.
- 3+ years working with AWS (Glue, S3, Redshift, Lambda, EMR, etc.).
- Proficient in Python or Scala for data transformation.
- Strong SQL skills and experience in performance tuning.
- Hands-on experience with Spark or PySpark.
- Knowledge of data lake and DWH architecture.

Nice to Have:
- Familiarity with Kafka, Kinesis, or real-time data streaming.
- Exposure to Terraform or CloudFormation.
- Experience with CI/CD tools like Git and Jenkins.

How to Apply: Interested candidates can send their resumes to heena.ruchwani@gspann.com
Posted 1 month ago
10 - 12 years
35 - 45 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Work from Office
Expert-level experience in backend development using .NET Core, C#, and EF Core. Strong expertise in PostgreSQL and efficient database design. Proficient in building and maintaining RESTful APIs at scale. Strong frontend development experience with ReactJS, JavaScript, and TypeScript.

Required Candidate profile: proficiency in HTML5, CSS3, and responsive design best practices. Hands-on experience with AWS cloud services, specifically designing systems with SNS, SQS, EC2, Lambda, and S3.
Posted 1 month ago
10 - 15 years
15 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Experience: 10 years
Work Type: Onsite
Budget: As per market standards

Primary Skills:
- NodeJS: 6+ years of hands-on backend development.
- JavaScript / HTML / CSS: strong frontend development capabilities.
- ReactJS / VueJS: working knowledge or project experience preferred.
- AWS serverless architecture: mandatory (Lambda, API Gateway, S3).
- LLM integration / AI development: experience with OpenAI, Anthropic APIs (see the sketch after this listing).
- Prompt engineering: context management and token optimization.
- SQL / NoSQL databases: solid experience with relational and non-relational DBs.
- End-to-end deployment: deploy, debug, and manage full-stack apps.
- Clean code: writes clean, maintainable, production-ready code.

Secondary Skills:
- Amazon Bedrock: familiarity is a strong plus.
- Web servers: experience with Nginx / Apache configuration.
- RAG patterns / vector DBs / AI agents: bonus experience.
- Software engineering best practices: strong design and architecture skills.
- CI/CD / DevOps exposure: beneficial for full pipeline integration.

Expectations:
- Own frontend and backend development.
- Collaborate closely with engineering and client teams.
- Build scalable, secure, and intelligent systems.
- Influence architecture and tech stack decisions.
- Stay up to date with AI trends and serverless best practices.
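A minimal hedged example of the LLM-integration work named above, calling the Anthropic Messages API from Python; the model id, system prompt, and token budget are illustrative assumptions:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

# max_tokens caps the reply length: one simple lever for the
# token-budget management this role calls for.
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model id
    max_tokens=200,
    system="You are a concise assistant for a support dashboard.",
    messages=[
        {"role": "user",
         "content": "Summarize: the user cannot reset a password."}
    ],
)
print(message.content[0].text)
```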
Posted 1 month ago
5 - 9 years
15 - 18 Lacs
Mumbai, Pune, Bengaluru
Work from Office
Description: Hands-on experience with AWS services including S3, Lambda, Glue, API Gateway, and SQS. Strong skills in data engineering on AWS, with proficiency in Python, PySpark, and SQL. Experience with batch job scheduling and managing data dependencies. Knowledge of data processing tools like Spark and Airflow. Automate repetitive tasks and build reusable frameworks to improve efficiency. Provide Run/DevOps support and manage the ongoing operation of data services.

Location: Bangalore, Mumbai, Pune, Chennai, Kolkata, Hyderabad
Posted 1 month ago