4.0 - 8.0 years
0 Lacs
Maharashtra
On-site
As an AWS DataOps Lead at Birlasoft, you will be responsible for configuring, deploying, monitoring, and managing AWS data platforms. Your role will involve managing data flows and dispositions in S3, Snowflake, and Postgres, and you will own user access and authentication on AWS, ensuring proper resource provisioning, security, and compliance. Experience with GitHub integration will be valuable, and familiarity with AWS native tools such as Glue, the Glue Catalog, CloudWatch, and CloudFormation (or Terraform) is essential. You will also assist with backup and disaster recovery processes.

Join our team and be part of Birlasoft's commitment to leveraging Cloud, AI, and Digital technologies to empower societies worldwide and enhance business efficiency and productivity. With over 12,000 professionals and a rich heritage spanning 170 years, we are dedicated to building sustainable communities and driving innovation through our consultative and design-thinking approach.
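For candidates weighing what "managing data flows and dispositions in S3" looks like in practice, here is a minimal, illustrative boto3 sketch of a lifecycle rule; the bucket name, prefix, and retention periods are hypothetical, not Birlasoft's actual configuration:

```python
import boto3

# Hypothetical bucket and retention values, for illustration only.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-platform-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-raw-data",
                "Filter": {"Prefix": "raw/"},
                "Status": "Enabled",
                # Move raw files to infrequent access after 30 days...
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                # ...and delete them after a year.
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```

A rule like this is typically how routine disposition (tiering and expiry) is automated, rather than relying on ad hoc deletion jobs.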
Posted 1 month ago
3.0 - 5.0 years
5 - 13 Lacs
Pune
Work from Office
Role & responsibilities Exp: 3 to 5 years Senior Backend Software Engineer Job Summary: This Backend Developer position involves designing, building, and maintaining scalable backend systems in AWS cloud services while following full life cycle of software development. The software development activity includes requirement specification, design, implementation, testing, manufacturing support, and problem investigation of field reported issues. Responsibilities: 1. Design and develop scalable backend services and APIs using modern programming languages 2. Build and maintain microservices architecture on AWS cloud platform 3. Develop serverless applications using AWS Lambda, API Gateway, and other managed services 4. Design and optimize database schemas for both SQL and NoSQL databases 5. Deploy and manage applications using AWS services including EC2, ECS, EKS, and Lambda 6. Manage containerized applications with Docker and Kubernetes on EKS 7. Develop software design specification that are tracible to requirement specification in accordance with the development process. 8. Perform required design testing including unit testing, integration testing, performance testing, and reliability testing. 9. Implement logging strategies and troubleshoot production issues 10. Optimize application performance and scalability based on metrics and user feedback Minimum Qualifications: • Degree in Electrical or Computer Engineering, Computer Science or a Technology Diploma with relevant industry experience in full-stack software development. • Work well individually and in a team environment. • Ability to work in a fast paced and agile development environment with measurable results • Effective written and verbal communication skills • Effective problem-solving skills. • 4-5 years of experience in two or more of the following areas: Excellent Proficiency in Java programming Hands-on experience with core AWS services including: Compute: EC2, Lambda, ECS/EKS Storage: S3, EBS, EFS Database: RDS, DynamoDB Networking: VPC, CloudFront, Route 53 Monitoring: CloudWatch Experience with both relational (MySQL) and NoSQL (DynamoDB, Redis) databases Experience with containerization technologies (Docker, Kubernetes) Understanding of CI/CD principles and tools Familiarity with message queues and event-driven architectures (SQS, SNS, EventBridge)
Posted 1 month ago
3.0 - 5.0 years
5 - 8 Lacs
Ahmedabad
Work from Office
We are seeking a certified and experienced AWS & Linux Administrator to support the infrastructure of SAP ECC systems running on Oracle databases in AWS. The role demands expertise in AWS services and enterprise Linux (RHEL/SLES), plus experience supporting SAP ECC and Oracle in mission-critical environments. The work shift aligns with US Pacific Time, which is 12:30 hours behind India time.

Key Responsibilities:
- Deploy, configure, and maintain Linux servers (RHEL/SLES) on AWS EC2 for SAP ECC and Oracle
- Administer and monitor SAP ECC infrastructure and the Oracle DB back end, ensuring high availability and performance
- Design and manage AWS infrastructure using EC2, EBS, VPC, IAM, S3, CloudWatch, and Backup services
- Collaborate with SAP Basis and Oracle DBA teams to manage system patching, tuning, and upgrades
- Implement backup and disaster recovery strategies for SAP and Oracle in AWS
- Automate routine tasks using shell scripts, Ansible, or AWS Systems Manager (see the sketch below)
- Ensure security, compliance, and system hardening of the SAP ECC and Oracle landscape
- Support system refreshes, migrations, and environment cloning
- Troubleshoot infrastructure-related incidents affecting SAP or Oracle availability

Minimum Qualifications:
- AWS Certified SysOps Administrator Associate (or a higher AWS certification)
- Linux certification: Red Hat RHCSA/RHCE or SUSE Certified Administrator
- 5+ years of experience managing Linux systems in enterprise or cloud environments
- 3+ years of hands-on AWS infrastructure administration
- Solid understanding of Oracle DB administration basics in SAP contexts (e.g., listener setup, tablespaces, logs)

Preferred Skills:
- Knowledge of SAP ECC on Oracle deployment architecture
- Experience managing Oracle on AWS using EC2
- Familiarity with SAP Notes, SAP EarlyWatch reports, and SAP/Oracle performance tuning
- Understanding of hybrid connectivity, such as VPN/Direct Connect between on-prem and AWS
- Hands-on experience with AWS CloudFormation, Terraform, or automation pipelines for infrastructure deployment

Soft Skills:
- Analytical thinking with attention to root-cause analysis
- Strong communication and documentation skills
- Ability to coordinate across SAP, DBA, and DevOps teams
- Flexibility to provide off-hours support as required
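To illustrate the Systems Manager automation bullet above, a minimal boto3 sketch that runs a patch-level check on tagged instances; the tag filter and the command itself are assumptions for illustration, not the employer's actual runbook:

```python
import boto3

ssm = boto3.client("ssm")

# Run a simple OS patch query on all instances tagged as SAP app servers.
# The tag key/value and the command are hypothetical.
response = ssm.send_command(
    Targets=[{"Key": "tag:Role", "Values": ["sap-ecc-app"]}],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["zypper list-patches || yum check-update"]},
    Comment="Routine patch-level check (illustrative)",
)
print(response["Command"]["CommandId"])
```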
Posted 1 month ago
2.0 - 5.0 years
0 - 0 Lacs
Nagpur
Remote
Key Responsibilities:
- Provision and manage GPU-based EC2 instances for training and inference workloads
- Configure and maintain EBS volumes and Amazon S3 buckets (versioning, lifecycle policies, multipart uploads) to handle large video and image datasets (see the upload sketch below)
- Build, containerize, and deploy ML workloads using Docker and push images to ECR
- Manage container deployment using Lambda, ECS, or AWS Batch for video inference jobs
- Monitor and optimize cloud infrastructure using CloudWatch, Auto Scaling Groups, and Spot Instances to ensure cost efficiency
- Set up and enforce IAM roles and permissions for secure access control across services
- Collaborate with the AI/ML, annotation, and backend teams to streamline cloud-to-model pipelines
- Automate cloud workflows and deployment pipelines using GitHub Actions, Jenkins, or similar CI/CD tools
- Maintain logs, alerts, and system metrics for performance tuning and auditing

Required Skills:

Cloud & Infrastructure:
- AWS services: EC2 (GPU), S3, EBS, ECR, Lambda, Batch, CloudWatch, IAM
- Data management: large file transfers, S3 multipart uploads, storage lifecycle configuration, archive policies (Glacier/IA)
- Security & access: IAM policies, roles, access keys, VPC (preferred)

DevOps & Automation:
- Tools: Docker, GitHub Actions, Jenkins, Terraform (bonus)
- Scripting: Python and shell scripting for automation and monitoring
- CI/CD: experience building and managing pipelines for model and API deployments

ML/AI Environment Understanding:
- Familiarity with GPU-based ML workloads
- Knowledge of model training and inference architecture (batch and real-time)
- Experience with containerized ML model execution is a plus

Preferred Qualifications:
- 2-5 years of experience in DevOps or cloud infrastructure roles
- AWS Associate/Professional certification (DevOps/Architect) is a plus
- Experience managing data-heavy pipelines, such as drone, surveillance, or video AI systems
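To illustrate the multipart-upload item above, a minimal boto3 sketch for pushing a large video file to S3; the file name, bucket, and threshold values are hypothetical:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Multipart settings tuned for large video files; the thresholds are
# illustrative, not a recommendation from the employer.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MB
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=8,
)

s3 = boto3.client("s3")
s3.upload_file(
    Filename="drone_footage_0001.mp4",    # hypothetical local file
    Bucket="example-video-datasets",      # hypothetical bucket
    Key="raw/drone_footage_0001.mp4",
    Config=config,
)
```

Multipart uploads let large transfers run in parallel chunks and resume after a failed part, which matters for the video datasets this role describes.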
Posted 1 month ago
8.0 - 10.0 years
14 - 18 Lacs
Jaipur
Work from Office
About the Role

We are looking for a highly skilled and experienced frontend JavaScript developer for the position of Principal Software Engineer who can lead the development and design of high-performance frontend architectures. In this role, you will take ownership of frontend systems, establish scalable components and design patterns, and collaborate across teams to ensure cohesive, secure, and performant product delivery. The ideal candidate can architect complex frontend systems; has a deep understanding of browser rendering, code bundling, optimization strategies, and modern state management; and can guide junior developers to grow in both technical and collaborative aspects. Candidates with exposure to backend fundamentals using Node.js, Express.js, and WebSocket-based real-time communication are highly preferred for seamless cross-functional collaboration.

About Auriga IT

Auriga IT is a digital solutions company founded in 2010 by an IIT Roorkee alumnus and based in Jaipur, India. It serves as a digital solutions partner for startups, corporates, government entities, and unicorns, focusing on design, technology, and data capabilities to help organizations launch new businesses and drive digital transformation.

Key Responsibilities:
- Lead the architectural design and implementation of scalable, reusable frontend applications using React.js and Next.js
- Define and implement frontend architecture flows, maintainable design systems, and component libraries
- Establish and enforce coding standards, performance budgets, and testing strategies
- Optimize applications for high performance and scalability, focusing on Core Web Vitals, bundle size reduction, and runtime performance
- Integrate secure practices: CSP, secure token flows, input validation, XSS/CSRF protections
- Guide the use of state management libraries (Redux Toolkit, Zustand, React Query) based on use-case suitability
- Collaborate with DevOps and backend teams on API integrations, WebSocket implementation (e.g., Socket.io), deployment, and system health
- Drive CI/CD processes using tools like GitHub Actions, Jenkins, Docker, and Vite/Webpack/Grunt/Gulp
- Participate in code reviews and mentor junior developers to build both technical and product understanding
- Conduct root-cause analysis and production-level debugging for critical issues across environments
- Coordinate with cross-functional teams, including QA, backend, DevOps, product, and design

Required Skills and Qualifications:
- Strong command of React.js, Next.js, JavaScript (ES6+), TypeScript, HTML5, CSS3, and Tailwind CSS, Styled Components, or Bootstrap
- Proven experience in designing modular, component-based architecture; SSR, ISR, and SSG patterns in Next.js; modern state management (Redux Toolkit, Zustand, React Query); RESTful API consumption and error handling; application security best practices (OAuth2, JWT, XSS/CSRF protection); and performance optimization (code splitting, dynamic imports, lazy loading, etc.)
- Dev tooling: Chrome DevTools, Lighthouse, Web Vitals, source map analysis
- Hands-on exposure to CI/CD (GitHub Actions, Jenkins), Webpack/Vite bundling, Git branching, GitHub PRs, and version control standards
- Testing frameworks: Jest/Cypress
- Strong foundation in debugging production issues, analyzing frontend logs, and identifying performance bottlenecks
- Experience building or maintaining design systems using tools like Storybook
- Ability to translate product vision into long-term frontend architecture plans
- Experience working in Agile teams and leading sprint activities
- Ensure accessibility compliance (a11y), semantic HTML, and SEO optimization for frontend delivery
- Familiarity with AWS tools such as S3, CloudFront, Lambda, Load Balancing, and EC2
- Knowledge of GraphQL, design patterns, and caching layers

Good to Have:
- Working knowledge of backend tools and APIs using Node.js and Express.js
- Exposure to Vue.js, SvelteKit, or other modern JS frameworks
- Understanding of micro frontends and federated module architecture
- Familiarity with infrastructure as code (Terraform, Pulumi; optional)
- Awareness of observability and monitoring tools like Sentry, LogRocket, or Datadog
- Working knowledge of Docker-based local environments
- Contributions to documentation, technical blogs, or internal tooling
- Experience with feature flags, A/B testing tools, or experiment-driven development
Posted 1 month ago
4.0 - 9.0 years
8 - 12 Lacs
Mumbai
Work from Office
We are looking for:

Role: AWS Infrastructure Engineer
Experience: 4+ years
Job Location: Bavdhan, Pune
Work Mode: Remote

Job Description: A skilled AWS Infrastructure Engineer with expertise in AWS services and Linux and Windows systems. The ideal candidate will design, deploy, and manage scalable, secure cloud infrastructure while supporting hybrid environments.

Key Responsibilities:
- Hands-on experience with multi-cloud environments (e.g., Azure, AWS, GCP)
- Design and maintain AWS infrastructure (EC2, S3, VPC, RDS, IAM, Lambda, and other AWS services)
- Implement security best practices (IAM, GuardDuty, Security Hub, WAF)
- Configure and troubleshoot AWS networking, hybrid connectivity, and URL filtering solutions (VPC, TGW, Route 53, VPNs, Direct Connect)
- Experience managing physical firewalls (Palo Alto, Cisco, etc.)
- Manage, troubleshoot, configure, and optimize services like Apache, NGINX, and MySQL/PostgreSQL on Linux/Windows
- Ensure Linux/Windows server compliance with patch management and security updates
- Provide L2/L3 support for Linux and Windows systems, ensuring minimal downtime and quick resolution of incidents
- Collaborate with DevOps, application, and database teams to ensure seamless integration of infrastructure solutions
- Automate tasks using Terraform, CloudFormation, or scripting (Bash, Python)
- Monitor and optimize cloud resources using CloudWatch, Trusted Advisor, and Cost Explorer (see the alarm sketch below)

Requirements:
- 4+ years of AWS experience and system administration on Linux & Windows
- Proficiency in AWS networking, security, and automation tools
- Certifications: AWS Solutions Architect (required), RHCSA/MCSE (preferred)
- Strong communication and problem-solving skills
- Web servers: Apache2, NGINX, IIS; OS: Ubuntu, Windows Server
- Certification: AWS Solutions Architect, Associate level

-- Muugddha Vanjarii, 7822804824, mugdha.vanjari@sunbrilotechnologies.com
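As a hedged illustration of the CloudWatch monitoring bullet, a boto3 sketch that creates a CPU alarm; the instance ID, threshold, and SNS topic ARN are placeholders, not this employer's configuration:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on sustained high CPU for one instance; all identifiers are
# placeholders for illustration.
cloudwatch.put_metric_alarm(
    AlarmName="example-web-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,            # 5-minute datapoints
    EvaluationPeriods=3,   # must breach for 15 minutes straight
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```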
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
As a member of Zendesk's engineering team in Australia, your main objective is to improve the customer experience by developing products that cater to over 170,000 global brands. These brands, including Discord, Calm, and Skyscanner, rely on Zendesk's solutions to ensure customer satisfaction every day. Working in a highly innovative and fast-paced environment, you will collaborate with a diverse group of people from around the world, contributing to the success of some of Zendesk's most beloved products.

This is a hybrid role that combines remote and on-site work, requiring three days in the office and relocation to Pune. You will be part of a dynamic team focused on creating distributed, high-scale, and data-intensive integrations that enhance Zendesk's core SaaS product. Collaborating with other SaaS providers and cloud vendors such as Slack, Atlassian, and AWS, you will incorporate cutting-edge technologies and features to deliver top-notch solutions to customers.

Your daily responsibilities will include designing, leading, and implementing customer-facing software projects, with an emphasis on good software practices and timely project delivery. Excellent communication skills, attention to detail, and the ability to influence others diplomatically are essential for this role. You will also be expected to demonstrate leadership, mentor team members, and consistently apply best practices throughout the development cycle.

To excel in this role, you should have a solid background in Golang for high-volume applications, at least 2 years of experience in frontend development using JavaScript and React, and a strong focus on long-term solution viability. Experience in identifying and resolving reliability issues at scale, effective time management, and building integrations with popular SaaS products are also key requirements. The tech stack includes Golang, JavaScript/TypeScript, React, Redux, React Testing Library, Cypress, Jest, AWS, Spinnaker, Kubernetes, Aurora/MySQL, DynamoDB, and S3.

Please note that candidates must be physically located in, and willing to work from, Karnataka or Maharashtra. In this hybrid role, you will work both remotely and onsite, fostering connection, collaboration, and learning while maintaining a healthy work-life balance. Zendesk is committed to providing an inclusive and fulfilling environment where employees can thrive in a diverse and supportive workplace.
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
Maharashtra
On-site
As a Senior Python Engineer at JPMorgan Chase within the AM Research Technology team, you will play a crucial role in an agile team dedicated to enhancing, building, and delivering cutting-edge technology products in a secure, stable, and scalable manner. Your contributions will drive significant business impact, leveraging deep technical expertise to address a wide range of challenges across various technologies and applications, with a particular focus on cloud-based systems and AI-driven solutions.

You will execute software solutions and design, develop, and troubleshoot technical issues with a forward-thinking approach to problem-solving. The role involves creating secure, high-quality production code, maintaining algorithms that operate seamlessly with different systems, and producing architecture and design artifacts for complex applications while ensuring adherence to design constraints. You will also analyze and synthesize large, diverse data sets to continuously improve software applications and systems, proactively identify hidden problems and patterns in data to drive improvements in coding practices and system architecture, contribute to software engineering communities of practice, participate in events exploring new and emerging technologies, and foster a team culture centered on diversity, equity, inclusion, and respect.

To qualify, you should have formal training or certification in software engineering concepts along with a minimum of 10 years of applied experience. You must have hands-on experience in Java and Python development, including Java frameworks like Spring and Python frameworks like Flask and SQLAlchemy. Experience with AI/ML frameworks such as LangChain, LangChain4j, and Spring AI is essential, along with practical knowledge of AWS services like ECS, EKS, and S3. You should also have a strong background in infrastructure provisioning with Terraform; practical experience with RDBMS like Postgres and NoSQL stores like DynamoDB and ElastiCache; and expertise in system design, application development, testing, and operational stability. Proficiency in coding in multiple languages, extensive experience developing, debugging, and maintaining code in a large corporate environment, a solid understanding of agile methodologies (including CI/CD, Application Resiliency, and Security), and demonstrated knowledge of software applications and technical processes in disciplines such as cloud computing, artificial intelligence, machine learning, and mobile technologies are also required.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
As an AWS Cloud Engineer at Talent Worx, you will be responsible for designing, deploying, and managing AWS cloud solutions to meet our organizational objectives. Your expertise in AWS technologies will be crucial in building scalable, secure, and robust cloud architectures that ensure optimal performance and efficiency.

Key Responsibilities:
- Design, implement, and manage AWS cloud infrastructure following best practices
- Develop cloud-based solutions utilizing AWS services such as EC2, S3, Lambda, RDS, and VPC
- Automate deployment, scaling, and management of cloud applications using Infrastructure as Code (IaC) tools like AWS CloudFormation and Terraform
- Implement security measures and best practices, including IAM, VPC security, and data protection
- Monitor system performance, troubleshoot issues, and optimize cloud resources for cost and performance
- Collaborate with development teams to set up CI/CD pipelines for streamlined deployment workflows
- Conduct cloud cost analysis and optimization to drive efficiency (see the sketch below)
- Stay updated on AWS features and industry trends for continuous innovation and improvement

Required Skills and Qualifications:
- 3+ years of experience in cloud engineering or a related field, focusing on AWS technologies
- Proficiency in AWS services like EC2, S3, EBS, RDS, Lambda, and CloudFormation
- Experience with scripting and programming languages (e.g., Python, Bash) for automation
- Strong understanding of networking concepts (VPC, subnetting, NAT, VPN)
- Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes
- Knowledge of DevOps principles, CI/CD tools, and practices
- AWS certification (e.g., AWS Certified Solutions Architect, AWS Certified Developer) preferred
- Analytical skills, attention to detail, and excellent communication abilities
- Bachelor's degree in Computer Science, Information Technology, or a related field

Join Talent Worx and enjoy benefits like global exposure, accelerated career growth, diverse learning opportunities, a collaborative culture, cross-functional mobility, access to cutting-edge tools, a focus on purpose and impact, and mentorship for leadership development.
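For context on the cost-optimization bullet, a minimal Python (boto3) sketch that stops running instances tagged as non-production; the tag scheme is an assumption for illustration, not Talent Worx's actual convention:

```python
import boto3

ec2 = boto3.client("ec2")

# Find running instances tagged as non-production; the tag scheme is hypothetical.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    inst["InstanceId"] for r in reservations for inst in r["Instances"]
]

if instance_ids:
    # Stopping (not terminating) preserves EBS volumes for the next workday.
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped {len(instance_ids)} dev instances: {instance_ids}")
```

Run on a schedule (e.g., via EventBridge and Lambda), a script like this is one common way teams trim out-of-hours compute spend.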
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You are a Senior Python Data Application Developer with strong expertise in core Python and data-focused libraries. Your primary responsibility is to design, develop, and maintain data-driven applications optimized for performance and scalability, building robust data pipelines, ETL processes, and APIs that integrate various data sources efficiently within the cloud environment.

You will work on AWS using serverless and microservices architectures, utilizing services such as AWS Lambda, API Gateway, S3, DynamoDB, Kinesis, and other AWS tools as required. Collaboration with cross-functional teams is essential to deliver feature-rich applications that meet business requirements. You will apply software design principles and best practices to ensure applications are maintainable, modular, and highly testable; set up monitoring solutions to proactively track application performance and detect anomalies; and optimize data applications for cost, performance, and reliability on AWS.

To excel in this position, you should have at least 5 years of professional experience in data-focused application development using Python. Proficiency in core Python and data libraries such as Pandas, NumPy, and PySpark is required, along with a strong understanding of AWS services like ECS, Lambda, API Gateway, S3, DynamoDB, and Kinesis. You must have experience building highly distributed and scalable solutions via serverless, microservice, and service-oriented architectures.

You should also be familiar with unit test frameworks, code quality tools, and CI/CD practices. Knowledge of database management and ORM concepts, with experience across relational (PostgreSQL, MySQL) and NoSQL (DynamoDB) databases, is desired, and an understanding of the end-to-end software development lifecycle, Agile methodology, and an AWS certification would be advantageous. Strong problem-solving abilities, attention to detail, critical thinking, and excellent communication skills are necessary for effective collaboration with technical and non-technical teams. Mentoring junior developers and contributing to a collaborative team environment are also part of the role.

This is a full-time position located in Bangalore with a hybrid work schedule. If you have proficiency in Pandas, NumPy, and PySpark, along with 5 years of experience in Python, we encourage you to apply and join our team dedicated to developing, optimizing, and deploying scalable data applications that support company growth and innovation.
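As a rough illustration of the serverless data-processing work described above, a sketch of an AWS Lambda handler that loads an uploaded CSV with pandas and writes its rows to DynamoDB; the table name and event shape assumptions are for illustration only:

```python
import json
import urllib.parse

import boto3
import pandas as pd

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-records")  # hypothetical table name


def handler(event, context):
    """Triggered by an S3 put event; loads the CSV and writes rows to DynamoDB."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    # StreamingBody is file-like, so pandas can read it directly.
    obj = s3.get_object(Bucket=bucket, Key=key)
    df = pd.read_csv(obj["Body"])

    # batch_writer buffers and retries unprocessed items automatically.
    with table.batch_writer() as batch:
        for row in df.to_dict(orient="records"):
            batch.put_item(Item={k: str(v) for k, v in row.items()})

    return {"statusCode": 200, "body": json.dumps({"rows": len(df)})}
```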
Posted 1 month ago
2.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
The company is hiring for a Python lead role. They are looking for an experienced Python lead to join their dynamic team. The ideal candidate should be proficient in developing robust and scalable solutions in Python using frameworks such as Django, Flask, or Pyramid, with expertise in AWS services like S3, Lambda, Events, and EC2, and in Docker containerization. The candidate should have at least 10+ years of experience, including a minimum of 2 years as tech lead of a Python development team.

As a Python lead, your responsibilities will include leading a team of 6-8 Python developers, providing technical guidance, mentorship, and support. You will design and develop high-quality, efficient, and scalable Python applications and APIs using Django or other frameworks. Experience with both SQL databases (such as PostgreSQL) and NoSQL databases (such as MongoDB) is essential. Collaboration with cross-functional teams to gather requirements, analyze needs, and develop solutions that meet business objectives is a key aspect of the role. You will also ensure code quality through code reviews, unit testing, and continuous integration; troubleshoot and resolve issues related to application performance, scalability, and reliability; stay updated with the latest trends and technologies in Python development and AWS services, providing recommendations for improvement; and establish and enforce development best practices, coding standards, and documentation guidelines within the team. Fostering a collaborative and innovative environment, promoting knowledge sharing, and encouraging continuous learning are further essential aspects of the role.

The company offers a culture of caring where people are prioritized. At GlobalLogic, you will experience an inclusive culture of acceptance and belonging, where you can build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Continuous learning and development opportunities help you grow personally and professionally, and you will work on interesting and meaningful projects that have an impact on clients globally. The company believes in balance and flexibility, offering various career areas, roles, and work arrangements to help you achieve a work-life balance. GlobalLogic is a high-trust organization known for its integrity and ethical practices; by joining, you become part of a safe, reliable, and ethical global entity where truthfulness, candor, and integrity are valued.

GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest and most forward-thinking companies. Since 2000, the company has been at the forefront of the digital revolution, creating innovative and widely used digital products and experiences, and collaborating with clients to transform businesses and redefine industries through intelligent products, platforms, and services.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Jodhpur, Rajasthan
On-site
The ideal candidate for this position should have the following qualifications and experience:

Backend Requirements:
- At least 5 years of experience working with Python
- Demonstrated hands-on experience with at least one of the following frameworks: Flask, Django, or FastAPI
- Proficiency with various AWS services, including Lambda, S3, SQS, and CloudFormation
- Skilled in working with relational databases such as PostgreSQL or MySQL
- Familiarity with testing frameworks like Pytest or NoseTest
- Expertise in developing REST APIs and implementing JWT authentication (see the sketch below)
- Proficiency with version control tools such as Git

Frontend Requirements:
- A minimum of 3 years of experience with ReactJS
- Thorough understanding of ReactJS and its core principles
- Experience with state management tools like Redux Thunk, Redux Saga, or Context API
- Familiarity with RESTful APIs and modern front-end build pipelines and tools
- Proficiency in HTML5, CSS3, and pre-processing platforms like SASS/LESS
- Experience implementing modern authorization mechanisms, such as JSON Web Tokens (JWT)
- Knowledge of front-end testing libraries like Cypress, Jest, or React Testing Library
- Bonus points for experience developing shared component libraries

If you meet the above criteria and are looking to work in a dynamic environment where you can utilize your backend and frontend development skills effectively, we encourage you to apply for this position.
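A minimal sketch of the JWT-authentication requirement flagged above, using the PyJWT library; the secret and expiry window are placeholders (a real service would load the secret from a secrets manager):

```python
import datetime

import jwt  # PyJWT

SECRET = "change-me"  # placeholder; load from a secrets manager in practice


def issue_token(user_id: str) -> str:
    """Issue a short-lived access token for an authenticated user."""
    payload = {
        "sub": user_id,
        "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=30),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")


def verify_token(token: str) -> str:
    """Return the user id if the token is valid; raises a jwt exception if not."""
    payload = jwt.decode(token, SECRET, algorithms=["HS256"])
    return payload["sub"]


token = issue_token("user-42")
print(verify_token(token))  # -> user-42
```

In a Flask, Django, or FastAPI service, `verify_token` would typically sit behind a decorator or dependency that guards protected routes.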
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Dynatrace Developer/Consultant, you will be responsible for setting up and maintaining monitoring systems to track the health and performance of data pipelines. Your role will involve configuring alerts and notifications to promptly identify and respond to issues or anomalies in data pipelines, developing procedures and playbooks for incident response and resolution, and collaborating with data engineers to optimize data flows and processing.

Your experience with data, ETL, data warehouse, and BI projects will be invaluable as you continuously monitor and analyze pipeline performance to identify bottlenecks and areas for improvement. You will implement logging mechanisms and error-handling strategies to capture and analyze pipeline failures for quick detection and troubleshooting. Working closely with data engineers and data analysts, you will monitor data quality metrics, detect data anomalies, and develop processes to address data quality issues. Forecasting resource requirements based on data growth and usage patterns will ensure that pipelines can handle increasing data volumes without performance degradation.

You will develop and maintain dashboards and reports to visualize key pipeline performance metrics, giving stakeholders insight into system health and data flow, and automate monitoring tasks and build tools for streamlined management and observability of data pipelines. Ensuring that data pipeline observability aligns with security and compliance standards, such as data privacy regulations and industry best practices, will be crucial. You will document monitoring processes, best practices, and system configurations, sharing knowledge with team members to improve overall data pipeline reliability and efficiency.

Collaborating with cross-functional teams, including data engineers, data scientists, and IT operations, you will troubleshoot issues and implement improvements, keeping abreast of the latest developments in data pipeline monitoring and observability technologies so you can recommend and implement advancements. Knowledge of AWS Glue, S3, and Athena is a nice-to-have, along with experience with JIRA and knowledge of a programming language such as Python, Java, or Scala.

This is a full-time position with a Monday to Friday schedule and an in-person work location.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Manager at Autodesk, you will lead the BI and Data Engineering team to develop and implement business intelligence solutions. Your role is crucial in empowering decision-makers through trusted data assets and scalable self-serve analytics. You will oversee the design, development, and maintenance of data pipelines, databases, and BI tools to support data-driven decision-making across the CTS organization.

Reporting to the leader of the CTS Business Effectiveness department, you will collaborate with stakeholders to define data requirements and objectives. Your responsibilities will include leading and managing a team of data engineers and BI developers, fostering a collaborative team culture, managing data warehouse plans, ensuring data quality, and delivering impactful dashboards and data visualizations. You will also collaborate with stakeholders to translate technical designs into business-appropriate representations, analyze business needs, and create data tools for analytics and BI teams. Staying up to date with data engineering best practices and technologies is essential to keep the company ahead of the industry.

To qualify for this role, you should have 3 to 5 years of experience managing data teams and a BA/BS in Data Science, Computer Science, Statistics, Mathematics, or a related field. Proficiency in Snowflake, Python, SQL, Airflow, Git, and big data environments like Hive, Spark, and Presto is required. Experience with workflow management, data transformation tools, and version control systems is preferred, and familiarity with Power BI, the AWS environment, Salesforce, and remote team collaboration is advantageous. The ideal candidate is a data ninja and leader who can derive insights from disparate datasets, understands Customer Success, tells compelling stories using data, and engages business leaders effectively.

At Autodesk, we are committed to creating a culture where everyone can thrive and realize their potential. Our values and ways of working help our people succeed, leading to better outcomes for our customers. If you are passionate about shaping the future and making a meaningful impact, join us in our mission to turn innovative ideas into reality.

Autodesk offers a competitive compensation package based on experience and location. In addition to base salaries, we provide discretionary annual cash bonuses, commissions, stock grants, and a comprehensive benefits package. If you are interested in a sales career at Autodesk or want to learn more about our commitment to diversity and belonging, please visit our website for more information.
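For a flavor of the Airflow skill mentioned above, a minimal DAG sketch with two dependent tasks; the DAG id and callables are illustrative, not Autodesk's actual pipelines (the `schedule` keyword assumes Airflow 2.4+):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull rows from the source system")


def load():
    print("write rows to the warehouse (e.g., Snowflake)")


with DAG(
    dag_id="example_daily_bi_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # load runs only after extract succeeds
```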
Posted 1 month ago
6.0 - 11.0 years
18 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Role & responsibilities JD for Java + React + AWS Experience: 6 - 10 years Required Skills: Java, Spring, Spring Boot, React, microservices, JMS, ActiveMQ, Tomcat, Maven, GitHub, Jenkins, Linux/Unix, Oracle and PL/SQL, AWS EC2, S3, API Gateway, Lambda, Route53, Secrets Manager, CloudWatch Nice to have skills: Experience with rewriting legacy Java applications using Spring Boot & React Building serverless applications Ocean Shipping domain knowledge AWS CodePipeline Responsibilities: Develop and implement front-end and back-end solutions using Java, Spring, Spring Boot, React, microservices, Oracle and PL/SQL and AWS services. Experience working with business users in defining processes and translating those to technical specifications. Design and develop user-friendly interfaces and ensure seamless integration between front-end and back-end components. Write efficient code following best practices and coding standards. Perform thorough testing and debugging of applications to ensure high-quality deliverables. Optimize application performance and scalability through performance tuning and code optimization techniques. Integrate third-party APIs and services to enhance application functionality. Build serverless applications Deploy applications in AWS environment Perform Code Reviews. Pick up production support engineer role when needed Excellent grasp of application security concerns and remediation techniques. Well-rounded technical background in current web and micro-service technologies. Responsible for being an expert resource for architects in the development of target architectures to ensure that they can be properly designed and implemented through best practices. Should be able to work in a fast paced environment. Stay updated with the latest industry trends and emerging technologies to continuously improve skills and knowledge.
Posted 1 month ago
10.0 - 15.0 years
16 - 27 Lacs
Pune
Work from Office
Dear Candidate,

This is with reference to an opportunity for Lead - AWS Data Engineering professionals. Please find below the job description.

Responsibilities:
- Lead the design and implementation of AWS-based data architectures and pipelines
- Architect and optimize data solutions using AWS services such as S3, Redshift, Glue, EMR, and Lambda
- Provide technical leadership and mentorship to a team of data engineers
- Collaborate with stakeholders to define project requirements and ensure alignment with business goals
- Ensure best practices in data security, governance, and compliance
- Troubleshoot and resolve complex technical issues in AWS data environments
- Stay updated on the latest AWS technologies and industry trends

Key Technical Skills & Responsibilities:
- Experience in the design and development of cloud data platforms using AWS services
- Must have experience designing and developing data lake / data warehouse / data analytics solutions using AWS services like S3, Lake Formation, Glue, Athena, EMR, Lambda, and Redshift
- Must be aware of AWS access control and data security features like VPC, IAM, Security Groups, and KMS
- Must be good with Python and PySpark for data pipeline building (see the sketch below)
- Must have data modeling experience, including S3 data organization
- Must understand Hadoop components, NoSQL databases, graph databases, and time series databases, and the AWS services available for those technologies
- Must have experience working with structured, semi-structured, and unstructured data
- Must have experience with streaming data collection and processing; Kafka experience is preferred
- Experience migrating data warehouse / big data applications to AWS is preferred
- Must be able to use GenAI services (like Amazon Q) for productivity gains

Eligibility Criteria:
- Bachelor's degree in computer science, data engineering, or a related field
- Extensive experience with AWS data services and tools
- AWS certification (e.g., AWS Certified Data Analytics - Specialty)
- Experience with machine learning and AI integration in AWS environments
- Strong understanding of data modeling, ETL/ELT processes, and cloud integration
- Proven leadership experience managing technical teams
- Excellent problem-solving and communication skills

Skill: Senior Tech Lead - AWS Data Engineering
Location: Pune
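Illustrating the Python/PySpark bullet above, a minimal sketch of an S3-to-S3 batch transform; the bucket paths, schema, and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-s3-etl").getOrCreate()

# Read raw events from the landing zone; path and schema are illustrative.
events = spark.read.json("s3://example-raw-zone/events/")

# Aggregate to daily counts per event type.
daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "event_type")
    .count()
)

# Write partitioned Parquet to the curated zone for Athena/Redshift Spectrum.
(daily_counts.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-zone/daily_event_counts/"))
```

Partitioning the curated output by date is the usual S3 data-organization pattern the posting alludes to, since it lets query engines prune whole prefixes.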
Posted 1 month ago
6.0 - 9.0 years
18 - 25 Lacs
Bangalore Rural, Bengaluru
Work from Office
ETL Tester - ETL/Data Migration Testing: AWS to GCP data migration across PostgreSQL, AlloyDB, Presto, BigQuery, S3, and GCS; Python for test automation (see the sketch below); data warehousing and cloud-native tools. Migration paths: PostgreSQL to AlloyDB, Presto to BigQuery, S3 to Google Cloud Storage.
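A minimal sketch of the kind of Python test automation this migration role implies: a pytest-style row-count parity check between the PostgreSQL source and the BigQuery target. The connection string and table names are placeholders:

```python
import psycopg2
from google.cloud import bigquery


def postgres_count(dsn: str, table: str) -> int:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]


def bigquery_count(table: str) -> int:
    client = bigquery.Client()
    rows = client.query(f"SELECT COUNT(*) AS n FROM `{table}`").result()
    return next(iter(rows)).n


def test_orders_row_parity():
    # Connection string and table names are placeholders.
    src = postgres_count("postgresql://user:pass@host/db", "public.orders")
    dst = bigquery_count("project.dataset.orders")
    assert src == dst, f"row-count mismatch: postgres={src} bigquery={dst}"
```

Row-count parity is only the first gate; real migration suites typically add checksum and sampled column-level comparisons on top of it.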
Posted 1 month ago
8.0 - 10.0 years
15 - 30 Lacs
Hyderabad
Work from Office
8+ years of hands-on experience with Java, including support and maintenance of legacy codebases. Strong familiarity with AWS services, especially EC2, RDS, S3, CloudWatch, and IAM.
Posted 1 month ago
6.0 - 11.0 years
0 - 1 Lacs
Bengaluru
Work from Office
Job Requirements

Please find below the JD.

Location: Whitefield, Bangalore
Employment Type: Full-Time
Experience Level: Senior (9 to 14 years)
Work Mode: Work from Office

Job Description: We are seeking a highly skilled and experienced Senior Java and AWS Developer to join our dynamic team. The ideal candidate will have a strong background in developing scalable server-side applications and cloud solutions using Java and AWS services.

Responsibilities:
- Design, develop, and maintain scalable microservices using Java and Spring Boot
- Develop and optimize cloud-based applications on AWS, leveraging services like Lambda, S3, RDS, and EC2
- Write unit and integration tests to maintain software quality
- Create and maintain RESTful APIs to support front-end functionality
- Ensure application performance, scalability, and security
- Implement best practices for cloud architecture and infrastructure
- Collaborate with front-end developers, designers, and other stakeholders
- Write and maintain technical documentation
- Monitor and optimize application performance
- Troubleshoot and resolve issues in a timely manner

Qualifications:
- Bachelor's degree in computer science or a related field
- Proven experience as a Java Developer
- Proficiency in Spring Boot and microservices architecture
- Hands-on experience with AWS services such as EC2, S3, Lambda, and RDS
- Hands-on experience with Angular
- Strong understanding of RESTful API design and development
- Familiarity with containerization technologies like Docker
- Experience with version control systems, especially GitLab
- Ability to work collaboratively in a team environment
- Excellent problem-solving skills and attention to detail

Skills: Java, microservices, Spring Boot, Angular, AWS (EC2, S3, Lambda, RDS), RESTful APIs, CFT, Git, JavaScript/TypeScript

Preferred Qualifications:
- AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified Developer)
- Experience with serverless architectures
- Knowledge of DevOps practices and CI/CD pipelines
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
At WT Technologies Pvt. Ltd., we are redefining how the event industry embraces digital transformation. With two pioneering products, WeTales and Wowsly, we offer a seamless blend of creativity and technology to elevate every celebration.

WeTales is our creative powerhouse, specializing in bespoke digital invitations that blend storytelling, animation, 3D design, and personalization to deliver unforgettable first impressions. From save-the-dates to full wedding invite suites, WeTales brings every love story to life, visually and emotionally. Wowsly, our event-tech platform, empowers hosts and planners with cutting-edge digital solutions like QR-based check-ins, live RSVP tracking, automated communication, and real-time guest coordination, making event management smarter, faster, and smoother. Together, WeTales + Wowsly form a complete ecosystem for modern events, covering design, delivery, and digital management, crafted with precision, passion, and innovation.

Key Responsibilities and Accountabilities:

Technology Leadership:
- Define the technology vision and strategy for Wowsly.com and Wetales.in
- Ensure scalability, security, and high performance across all platforms
- Evaluate and implement cutting-edge technologies to enhance product offerings

Development & Architecture:
- Oversee front-end development using React.js and ensure seamless user experiences
- Manage back-end development in PHP Laravel, optimizing database interactions and API performance
- Architect scalable and secure solutions on AWS, leveraging cloud-native services

Team Management:
- Lead and mentor a team of developers and engineers, fostering a culture of innovation and collaboration
- Set development processes, code reviews, and quality standards
- Recruit, onboard, and retain top technical talent as the team grows

Operations & Infrastructure:
- Manage AWS infrastructure, ensuring cost optimization, uptime, and reliability
- Oversee CI/CD pipelines, DevOps processes, and automated testing
- Handle system monitoring, debugging, and issue resolution

Collaboration & Stakeholder Management:
- Work with the CEO and product managers to define and prioritize product roadmaps
- Communicate technical challenges and opportunities to non-technical stakeholders
- Ensure alignment between technical capabilities and business objectives

Required Skills & Qualifications:

Technical Expertise:
- Strong experience in front-end development using React.js
- Proven expertise in back-end development using PHP Laravel
- Hands-on experience with AWS services, including EC2, S3, RDS, Lambda, and CloudFront
- Knowledge of database systems like MySQL, PostgreSQL, or MongoDB
- Proficiency in DevOps tools and processes, including Docker, Kubernetes, Jenkins, etc.
- Understanding of web performance optimization, security protocols, and API integrations

Leadership & Management:
- 5+ years of experience in technology roles, with at least 3 years in a leadership capacity
- Excellent team management, mentoring, and delegation skills
- Ability to align technology initiatives with business goals and product requirements

Soft Skills:
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal abilities
- Entrepreneurial mindset with a passion for innovation

Preferred Qualifications:
- Experience with SaaS platforms and B2B products
- Familiarity with event-tech solutions and industry trends
- Prior experience in a start-up or fast-paced work environment
- Understanding of mobile app development (React Native, Flutter, etc.)
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad
Hybrid
We're looking for a talented and results-oriented Cloud Solutions Architect to join Sureify's engineering team as a key member. You'll help build and evolve our next-generation cloud-based compute platform for digitally delivered life insurance, weighing dimensions such as strategic goals, growth models, opportunity cost, talent, and reliability. You'll collaborate closely with the product development team on platform feature architecture so that the architecture aligns with operational needs and opportunities. With the number of customers growing and growing, it's time for us to mature the fabric our software runs on. This is your opportunity to make a large impact at a high-growth enterprise software company.

Key Responsibilities:
- Collaborate with key stakeholders across our product, delivery, data, and support teams to design scalable and secure application architectures on AWS using services like EC2, ECS, EKS, Lambda, VPC, RDS, and ElastiCache, provisioned via Terraform
- Design and implement CI/CD pipelines using GitHub, Jenkins, Spinnaker, and Helm to automate application deployment and updates, with a key focus on container management, orchestration, scaling, performance and resource optimization, and deployment strategies
- Design and implement security best practices for AWS applications, including Identity and Access Management (IAM), encryption, container security, and secure coding practices
- Design and implement application observability using CloudWatch and New Relic, with a key focus on monitoring, logging, and alerting to provide insight into application performance and health
- Design and implement key integrations of application components and external systems, ensuring smooth and efficient data flow
- Diagnose and resolve issues related to application performance, availability, and reliability
- Create, maintain, and prioritise a quarter-over-quarter backlog by identifying key areas of improvement such as cost optimization, process improvement, and security enhancements
- Work closely with the DevOps team as a guide, mentor, and enabler, ensuring the practices you design and implement are followed and imbibed by the team

Required Skills:
- Proficiency in AWS services such as EC2, ECS, EKS, S3, RDS, VPC, Lambda, SES, SQS, ElastiCache, Redshift, and EFS
- Strong programming skills in languages such as Groovy, Python, and Bash shell scripting
- Experience with CI/CD tools and practices, including Jenkins, Spinnaker, and ArgoCD
- Familiarity with IaC tools like Terraform or CloudFormation
- Understanding of AWS security best practices, including IAM and KMS
- Familiarity with Agile development practices and methodologies
- Strong analytical skills with the ability to troubleshoot and resolve complex issues
- Proficiency with observability, monitoring, and logging tools like AWS CloudWatch, New Relic, and Prometheus
- Knowledge of container orchestration tools and concepts, including Kubernetes and Docker
- Strong teamwork and communication skills, with the ability to work effectively with cross-functional teams

Nice to have:
- AWS Certified Solutions Architect - Associate or Professional
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
You will be working as a Business Intelligence Engineer III in Pune on a 6-month contract basis with the TekWissen organization. Your primary responsibility will be data engineering on AWS, including designing and implementing scalable data pipelines using AWS services such as S3, AWS Glue, Redshift, and Athena. You will also focus on data modeling and transformation, developing and optimizing dimensional data models to support various business intelligence and analytics use cases, and you will collaborate with stakeholders to understand reporting and analytics requirements and build interactive dashboards and reports using visualization tools like the client's QuickSight.

Your role will also involve implementing data quality checks and monitoring processes to ensure data integrity and reliability. You will manage and maintain the AWS infrastructure required for the data and analytics platform, optimizing the performance, cost, and security of the underlying cloud resources. Collaboration with cross-functional teams and sharing knowledge and best practices will be essential for identifying data-driven insights.

As a successful candidate, you should have at least 3 years of experience as a Business Intelligence Engineer or Data Engineer, with a strong focus on AWS cloud technologies. Proficiency in designing and implementing data pipelines using AWS services like S3, Glue, Redshift, Athena, and Lambda is mandatory (see the sketch below). You should also possess expertise in data modeling, dimensional modeling, and data transformation techniques, and experience deploying business intelligence solutions using tools like QuickSight and Tableau. Strong SQL and Python programming skills are required for data processing and analysis, knowledge of cloud architecture patterns, security best practices, and cost optimization on AWS is crucial, and excellent communication and collaboration skills are necessary to work effectively with cross-functional teams.

Hands-on experience with Apache Spark, Airflow, or other big data technologies, familiarity with AWS DevOps practices and tools, agile software development methodologies, and AWS certifications will be considered preferred skills. The position requires a candidate with a graduate degree. TekWissen Group is an equal opportunity employer supporting workforce diversity.
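As an illustration of the Athena side of the pipeline skills above, a boto3 sketch that runs a query and prints the results; the database, query, and output location are placeholders:

```python
import time

import boto3

athena = boto3.client("athena")

# Database, table, and output location are placeholders.
qid = athena.start_query_execution(
    QueryString="SELECT order_date, COUNT(*) FROM orders GROUP BY 1",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes (a production job would add backoff and a timeout).
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    result = athena.get_query_results(QueryExecutionId=qid)
    for row in result["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```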
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
At our organization, we prioritize people and are dedicated to providing cutting-edge AI solutions with integrity and passion. We are currently seeking a Senior AI Developer who is proficient in AI model development, Python, AWS, and scalable tool-building. In this role, you will play a key part in designing and implementing AI-driven solutions, developing AI-powered tools and frameworks, and integrating them into enterprise environments, including mainframe systems.

Your responsibilities will include developing and deploying AI models using Python and AWS for enterprise applications, building scalable AI-powered tools, designing and optimizing machine learning pipelines, implementing NLP and GenAI models, developing Retrieval-Augmented Generation (RAG) systems, maintaining AI frameworks and APIs, architecting cloud-based AI solutions using AWS services, writing high-performance Python code, and ensuring the scalability, security, and performance of AI solutions in production.

To qualify for this role, you should have at least 5 years of experience in AI/ML development; expertise in Python and AWS; a strong background in machine learning and deep learning; experience with LLMs, NLP, and RAG systems; hands-on experience building and deploying AI models; proficiency in cloud-based AI solutions; experience developing AI-powered tools and frameworks; knowledge of mainframe integration and enterprise AI applications; and strong coding skills with a focus on software development best practices. Preferred qualifications include familiarity with MLOps, CI/CD pipelines, and model monitoring; a background in developing AI-based enterprise tools and automation; and experience with vector databases and AI-powered search technologies.

You will receive health insurance, accident insurance, and a competitive salary based on factors including location, education, qualifications, experience, technical skills, and business needs. You will also be expected to actively participate in monthly team meetings, team-building efforts, technical discussions, and peer reviews; contribute to the OP Wiki/Knowledge Base; and provide status reports to OP Account Management as required.

OP is a technology consulting and solutions company offering advisory and managed services, innovative platforms, and staffing solutions across fields such as AI, cybersecurity, and enterprise architecture. Our team comprises dynamic, creative thinkers dedicated to delivering quality work. As a member of the OP team, you will have access to industry-leading consulting practices, strategies, technologies, and innovative training and education. We are looking for a technology leader with a strong track record of technical excellence and a focus on process and methodology.
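To make the RAG requirement concrete, a toy retrieval sketch in plain Python/NumPy; the "embedding" here is a stand-in character histogram, where a real system would call an embedding model and store vectors in a vector database:

```python
import numpy as np

# Toy corpus; a real system would index enterprise documents.
DOCS = [
    "Claims over $10,000 require supervisor approval.",
    "Password resets are handled by the IT service desk.",
    "Quarterly reports are due the first Friday of the month.",
]


def embed(text: str) -> np.ndarray:
    """Stand-in embedding: a normalized character histogram. Illustrative only."""
    vec = np.zeros(128)
    for ch in text.lower():
        vec[ord(ch) % 128] += 1
    return vec / (np.linalg.norm(vec) + 1e-9)


DOC_VECS = np.stack([embed(d) for d in DOCS])


def retrieve(query: str, k: int = 1) -> list[str]:
    # Dot products of unit vectors = cosine similarity.
    scores = DOC_VECS @ embed(query)
    return [DOCS[i] for i in np.argsort(scores)[::-1][:k]]


context = retrieve("who approves large claims?")
print(context)  # retrieved text would be prepended to the LLM prompt
```

The retrieve-then-generate split is the essence of RAG: the retrieved passages ground the LLM's answer in enterprise data instead of its training corpus.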
Posted 1 month ago
2.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be responsible for designing, developing, and maintaining data pipelines and ETL workflows for processing large-scale structured and unstructured data. Expertise in AWS data services (S3, Workflows, Databricks, SQL), big data processing, real-time analytics, and cloud data integration is crucial for this role, and team-leading experience will be valuable.

Your key responsibilities will include redesigning optimized and scalable ETL using Spark, Python, SQL, and UDFs; implementing ETL/ELT Databricks workflows for structured data processing; and creating data quality checks using Unity Catalog. You will also drive daily status calls and sprint planning meetings, ensure the security, quality, and compliance of data pipelines, and collaborate with data architects and analysts to meet business requirements.

To qualify for this position, you should have at least 8 years of experience in data engineering, including a minimum of 2 years working with AWS services, and hands-on experience with tools like S3, Databricks, or Workflows. Knowledge of Adverity, experience with ticketing tools like Asana or JIRA, and data analysis skills are considered advantageous. Strong SQL and data processing skills (e.g., PySpark, Python) are required, along with experience in data cataloging, lineage, and governance frameworks. Your contributions to CI/CD integration, observability, and documentation, as well as your ability to quickly analyze issues and drive collaboration within the team, will be instrumental in achieving the goals of the organization.
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
As a Principal Data Engineer (Associate Director) at Fidelity in Bangalore, you will be an integral part of the ISS Data Platform Team, which builds and maintains the platform that supports ISS business operations. You will lead a team of senior and junior developers, providing mentorship and guidance, while taking ownership of delivering a subsection of the wider data platform. Your role will involve designing, developing, and maintaining scalable data pipelines and architectures to facilitate data ingestion, integration, and analytics.

Collaboration will be a key aspect of your responsibilities as you work closely with enterprise architects, business analysts, and stakeholders to understand data requirements, validate designs, and communicate progress. Your innovative mindset will drive technical advancements within the department, focusing on code reusability, quality, and developer productivity, and you will challenge the status quo by incorporating the latest data engineering practices and techniques.

Expertise in cloud-based data platforms, particularly Snowflake and Databricks, will be essential for creating an enterprise lakehouse. Advanced proficiency in the AWS ecosystem and experience with core AWS data services like Lambda, EMR, and S3 will be highly valuable, and experience designing event-based or streaming data architectures using Kafka (see the sketch below), along with strong Python and SQL skills, will be crucial for success in this role.

The role also involves implementing data access controls to ensure data security and performance optimization in compliance with regulatory requirements. Proficiency in CI/CD pipelines for deploying infrastructure and pipelines, experience with RDBMS and NoSQL offerings, and familiarity with orchestration tools like Airflow will be beneficial. Your soft skills, including problem-solving, strategic communication, and project management, will be key in leading problem-solving efforts, engaging with stakeholders, and overseeing project lifecycles.

By joining our team at Fidelity, you will receive a comprehensive benefits package along with support for your wellbeing and professional development. We are committed to a flexible work environment that prioritizes work-life balance and motivates you to contribute effectively to our team. To explore more about our work culture and opportunities for growth, visit careers.fidelityinternational.com.
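A minimal sketch of the Kafka streaming experience mentioned above, using the kafka-python client; the broker address, topic, and group id are placeholders, not Fidelity's infrastructure:

```python
import json

from kafka import KafkaConsumer  # kafka-python

# Broker address, topic, and group id are placeholders for illustration.
consumer = KafkaConsumer(
    "trade-events",
    bootstrap_servers=["localhost:9092"],
    group_id="ingest-service",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # In a real pipeline this would land in the lakehouse (e.g., S3/Snowflake)
    # rather than stdout.
    print(message.topic, message.partition, message.offset, event)
```

The consumer group id is what lets multiple instances share partitions, which is the usual scaling lever in event-based architectures like the one this role describes.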
Posted 1 month ago