
1828 SQS Jobs - Page 11

JobPe aggregates results for easy application access, but you apply directly on each job portal.

2.0 - 4.0 years

0 Lacs

Greater Hyderabad Area

On-site

Expertise in AWS services such as EC2, CloudFormation, S3, IAM, SNS, SQS, EMR, Athena, Glue, Lake Formation, etc. Expertise in Hadoop/EMR/Databricks with good debugging skills to resolve Hive and Spark related issues. Sound fundamentals of database concepts and experience with relational or non-relational database types such as SQL, key-value, graphs, etc. Experience in infrastructure provisioning using CloudFormation, Terraform, Ansible, etc. Experience in programming languages such as Python/PySpark. Excellent written and verbal communication skills.

Key Responsibilities: Work closely with the data lake engineers to provide technical guidance, consultation, and resolution of their queries. Assist in the development of simple and advanced analytics best practices, processes, technology and solution patterns, and automation (including CI/CD). Work closely with various stakeholders in the US team with a collaborative approach. Develop data pipelines in Python/PySpark to be executed in the AWS cloud. Set up analytics infrastructure in AWS using CloudFormation templates. Develop mini/micro-batch and streaming ingestion patterns using Kinesis/Kafka. Seamlessly upgrade applications to higher versions, such as Spark/EMR upgrades. Participate in code reviews of the developed modules and applications. Provide inputs for the formulation of best practices for ETL processes/jobs written in programming languages such as PySpark, and for BI processes. Work with column-oriented data storage formats such as Parquet, interactive query services such as Athena, and event-driven compute services such as Lambda. Perform R&D on the latest big data tools in the market, carry out comparative analysis, and provide recommendations to choose the best tool for the current and future needs of the enterprise.

Required Qualifications: Bachelor's or Master's degree in Computer Science or a similar field. 2-4 years of strong experience in big data development. Expertise in AWS services such as EC2, CloudFormation, S3, IAM, SNS, SQS, EMR, Athena, Glue, Lake Formation, etc. Expertise in Hadoop/EMR/Databricks with good debugging skills to resolve Hive and Spark related issues. Sound fundamentals of database concepts and experience with relational or non-relational database types such as SQL, key-value, graphs, etc. Experience in infrastructure provisioning using CloudFormation, Terraform, Ansible, etc. Experience in programming languages such as Python/PySpark. Excellent written and verbal communication skills.

Preferred Qualifications: Cloud certification (AWS, Azure, or GCP).

About Our Company: Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S.-based financial planning company headquartered in Minneapolis with a global presence. The firm's focus areas include Asset Management and Advice, Retirement Planning, and Insurance Protection. Be part of an inclusive, collaborative culture that rewards you for your contributions, and work with other talented individuals who share your passion for doing great work. You'll also have plenty of opportunities to make your mark at the office and a difference in your community. So if you're talented, driven, and want to work for a strong ethical company that cares, take the next step and create a career at Ameriprise India LLP. Ameriprise India LLP is an equal opportunity employer. We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status, or any other basis prohibited by law.

Full-Time/Part-Time: Full time
Timings: 2:00p-10:30p (India)
Business Unit: AWMPO AWMP&S President's Office
Job Family Group: Technology
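To give a concrete sense of the day-to-day work this posting describes, here is a minimal, hypothetical PySpark batch sketch: raw CSV files landed in S3 are cleaned and written as partitioned Parquet that Athena/Glue could query. Bucket names, paths, and columns are placeholders, not Ameriprise systems.

```python
# Illustrative only: a minimal PySpark batch job of the kind described above.
# Bucket names, paths, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

def run(input_path: str, output_path: str) -> None:
    spark = (
        SparkSession.builder
        .appName("example-datalake-batch")
        .getOrCreate()
    )

    # Read raw CSV landed in S3, apply a simple transformation,
    # and write column-oriented Parquet for downstream Athena/Glue queries.
    raw = spark.read.option("header", "true").csv(input_path)
    cleaned = (
        raw.withColumn("load_date", F.current_date())
           .dropDuplicates()
    )
    cleaned.write.mode("overwrite").partitionBy("load_date").parquet(output_path)
    spark.stop()

if __name__ == "__main__":
    run("s3://example-raw-bucket/input/", "s3://example-curated-bucket/output/")
```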

Posted 1 week ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are looking for an experienced and highly skilled Senior .NET Developer with strong expertise in .NET technologies, SQL Server, and AWS/Azure cloud services. The ideal candidate should have extensive experience in designing and developing scalable enterprise applications, client interaction, and leading development initiatives in Agile environments.

Key Responsibilities: Work closely with clients and stakeholders to gather requirements and analyze business needs. Lead development activities using .NET Core, C#, MVC, Entity Framework, Web API, and JavaScript frameworks. Design and implement complex database solutions using SQL Server (writing stored procedures, optimizing queries). Contribute to UI/UX implementation using jQuery, Bootstrap, and Kendo UI. Develop and maintain ETL processes using SSIS. Participate in all Scrum ceremonies (daily stand-ups, sprint planning, reviews, and retrospectives). Estimate user stories, perform UI and DB design, and ensure high-quality code through unit testing. Manage code deployment across Dev, Staging, and Production environments. Provide production support and troubleshoot issues effectively. Monitor project health daily and support junior/associate developers across the SDLC. Ensure compliance with agile practices and quality standards. Participate in release calls and resolve deployment-related issues. Conduct daily/weekly production monitoring and job scheduling. Provide mentorship to team members and support technical problem-solving.

Must-Have Skills: 7+ years of experience in .NET development (C#, .NET Core, MVC, Web API). Strong experience with SQL Server – complex queries, stored procedures, performance tuning. Hands-on experience with front-end technologies – JavaScript, jQuery, Bootstrap, Kendo UI. Experience with SSIS for ETL and data transformation. Strong understanding of Agile methodologies and participation in Scrum ceremonies. Knowledge of AWS services – EC2, SQS, SNS, Lambda, Containers, API Gateway. Excellent communication and interpersonal skills.

Good-to-Have / Preferred Skills: Mortgage or Financial Services domain knowledge. Experience with large-scale applications and high-volume transactional systems. Exposure to AI-driven development and QA automation tools. Familiarity with Linux and open-source tools. Experience with additional RDBMS – MySQL, PostgreSQL. Knowledge of software design patterns, refactoring techniques, and unit testing frameworks. Experience with modern DevOps practices and CI/CD pipelines.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurgaon

On-site

About Payoneer: Founded in 2005, Payoneer is the global financial platform that removes friction from doing business across borders, with a mission to connect the world's underserved businesses to a rising global economy. We're a community with over 2,500 colleagues all over the world, working to serve customers and partners in over 190 markets. By taking the complexity out of financial workflows – including everything from global payments and compliance, to multi-currency and workforce management, to providing working capital and business intelligence – we give businesses the tools they need to work efficiently worldwide and grow with confidence.

Location: Gurugram - India

Role Summary: We're looking for a Backend Team Lead who is a tech enthusiast with a drive for excellence and an out-of-the-box mindset to satisfy business needs in a complex payment solution environment.

What you'll be spending your time on: Collaborate closely with Product, Design/UX, DevOps, and other R&D teams. Lead an autonomous team of software engineers who work closely with product management to achieve business goals. Put special focus on developing the team. Provide technical authority to your team by demonstrating a hands-on leadership style. Be responsible for the overall design, development, architecture, code quality, and production environment deployment of your team.

What we are looking for: 5+ years of experience as a server-side developer, using C# (a must!), REST APIs, webhooks, and asynchronous communication with queues/streams (RabbitMQ and Kafka). 3+ years of experience with SQL (MSSQL/Oracle/MySQL/PostgreSQL, etc.). 2+ years of experience with observability systems (Dynatrace, Grafana, etc.). 3+ years of managerial experience, leading development of customer-facing products – a must! Experience with messaging queues or streams such as RabbitMQ/SQS/Kafka. Broad knowledge of OOP and design patterns. Experience with microservices. Experience in engineering best practices (writing unit tests, code reviews, testing coverage, agile methodologies). Team player attitude and mentality. Experienced and passionate about managing and growing people. Ambitious and eager to learn new things. B.E./B.S. in computer science or an equivalent degree.

Not a must but a great advantage: Experience with Redis or similar. Experience with an ORM such as Entity Framework. Experience in building SaaS platforms in a cloud-based/hybrid environment.

Who we are: Payoneer (NASDAQ: PAYO) is the world's go-to partner for digital commerce, everywhere. From borderless payments to boundless growth, Payoneer promises any business, in any market, the technology, connections, and confidence to participate and flourish in the new global economy. Powering growth for customers ranging from aspiring entrepreneurs in emerging markets to the world's leading brands, Payoneer offers a universe of opportunities, open to you.

The Payoneer Ways of Working:
Act as our customer's partner on the inside. Learning what they need and creating what will help them go further.
Continuously improve. Always striving for a higher standard than our last.
Do it. Own it. Being fearlessly accountable in everything we do.
Build each other up. Helping each other grow, as professionals and people.

If this sounds like a business, a community, and a mission you want to be part of, click now to apply. We are committed to providing a diverse and inclusive workplace.
Payoneer is an equal opportunity employer, and all qualified applicants will receive consideration for employment no matter your race, color, ancestry, religion, sex, sexual orientation, gender identity, national origin, age, disability status, protected veteran status, or any other characteristic protected by law. If you require reasonable accommodation at any stage of the hiring process, please speak to the recruiter managing the role for any adjustments. Decisions about requests for reasonable accommodation are made on a case-by-case basis.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position Summary: For the Infor ION Common Services team, we are looking for a highly motivated and results-driven Development Support Engineer. As a DevSupport Engineer, you are the linking pin between the Infor Support organization and the Development team. You will respond to incoming support issues that cannot be handled by Infor Support. You will analyze the issues and propose solutions. You will work with your DevSupport peers in both India and the U.S. Issues that are caused by flaws in the software will be analyzed together with the Development team. You will work within a multidisciplinary agile development team in close collaboration with other skilled software engineers, QA engineers, architects, and business analysts. You determine your daily tasks in coordination with these colleagues rather than waiting for work to be assigned to you.

A Day in The Life Typically Includes:
What You Will Need:
Required skills:
* Bachelor or Master of Technology in Computer Science Engineering or Electronics Engineering;
* Several years of experience in a software development or support environment;
* Basic Java development skills;
* Analytical, accurate, and results-driven;
* Proactive problem solver;
* Good communication skills (written and oral) in English;
* Experience with, and enjoys working within, an international agile development environment.

What Will Put You Ahead?
Preferred qualifications:
* Experience in working in an international environment;
* Database skills in native SQL;
* Amazon Web Services, especially EC2, S3, SQS, ELB, OpenSearch.

About Infor: Infor is a global leader in business cloud software products for companies in industry-specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information, visit www.infor.com.

Our Values: At Infor, we strive for an environment that is founded on a business philosophy called [1] Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and the communities we serve in now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees.

Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.

At Infor, we value your privacy; that's why we created a policy that you can read [2] here.

References:
1. https://www.kochind.com/about/business-philosophy
2. https://www.infor.com/about/privacy

Posted 1 week ago

Apply

3.0 - 8.0 years

3 - 8 Lacs

Gurgaon

On-site

About Payoneer Founded in 2005, Payoneer is the global financial platform that removes friction from doing business across borders, with a mission to connect the world's underserved businesses to a rising global economy. We're a community with over 2,500 colleagues all over the world, working to serve customers, and partners in over 190 markets. By taking the complexity out of the financial workflows–including everything from global payments and compliance, to multi-currency and workforce management, to providing working capital and business intelligence–we give businesses the tools they need to work efficiently worldwide and grow with confidence. Location: Gurugram - India Full-time What You'll Be Spending Your Time On: Take a leadership role in achieving team goals, contributing to the overall design, architecture, development, quality, and production deployment of the team's systems Design and implement robust, scalable, and maintainable backend solutions for complex scenarios, ensuring high-quality results that may be consumed by other teams. Collaborate effectively within your team and with cross-functional partners, such as Product, Design/UX, DevOps, and other R&D teams, representing your team as needed. Maintain and improve the team's engineering practices, suggesting and implementing technology, patterns, or process enhancements. Proactively identify areas of improvement in team systems, processes, and scalability. Lead by example in code quality, contributing significantly to code reviews and acting as a focal point for engineering excellence questions. Help monitor production systems, investigate potential issues, and lead efforts to resolve critical production challenges while maintaining a customer-centric approach. Have You Done This Kind of Stuff: 3 - 8 years in backend software engineering roles, with demonstrated ability to navigate technical trade-offs and ambiguity effectively. Strong proficiency in C#, Java, or any similar object-oriented languages. Hands-on experience in SQL Server and database management Experience with message queues or streaming platforms (e.g., RabbitMQ, SQS, Kafka). Experience in writing unit test and strong knowledge of design principles, data structures and algorithms. Experience with microservices architecture. Experience in designing new functionality for existing complex components while maintaining scalability and performance. Ability to collaborate effectively and communicate technical concepts to diverse stakeholders. BSc/BE/B.Tech in Computer Science, Software Engineering, or equivalent degree Not a Must, but a Great Advantage: Practical experience with Agile development methodologies. Familiarity with cloud platforms (AWS, Azure, or Google Cloud). Practical knowledge of non-relational databases (eg. MongoDB) Experience in mentoring new hires and interns, fostering a culture of collaboration and best practices. The Payoneer Ways of Working Act as our customer's partner on the inside Learning what they need and creating what will help them go further. Continuously improve Always striving for a higher standard than our last. Do it. Own it. Being fearlessly accountable in everything we do. Build each other up Helping each other grow, as professionals and people. If this sounds like a business, a community, and a mission you want to be part of, click now to apply. We are committed to providing a diverse and inclusive workplace. 
Payoneer is an equal opportunity employer, and all qualified applicants will receive consideration for employment no matter your race, color, ancestry, religion, sex, sexual orientation, gender identity, national origin, age, disability status, protected veteran status, or any other characteristic protected by law. If you require reasonable accommodation at any stage of the hiring process, please speak to the recruiter managing the role for any adjustments. Decisions about requests for reasonable accommodation are made on a case-by-case basis.

Posted 1 week ago

Apply

4.0 years

1 - 3 Lacs

Hyderābād

On-site

Java AWS engineer with experience in building on AWS services such as Lambda, Batch, SQS, S3, DynamoDB, etc. using the AWS Java SDK and CloudFormation templates. 4 to 8 years of experience in design, development, and triaging for large, complex systems. Experience in Java and object-oriented design skills. 3-4+ years of microservices development and multithreading. 2+ years working in Spring Boot. Experience using development tools such as IntelliJ/Eclipse, Postman, Git, and Cucumber. Hands-on experience in building microservices-based applications using Spring Boot, REST, and JSON. DevOps understanding – containers, cloud, automation, security, configuration management, CI/CD. Experience using CI/CD processes for application software integration and deployment with Maven, Git, and Jenkins. Experience dealing with NoSQL databases such as Cassandra. Experience building scalable and resilient applications in private or public cloud environments and with cloud technologies. Experience utilizing tools such as Maven, Docker, Kubernetes, ELK, and Jenkins. Agile software development (typically Scrum, Kanban, SAFe). Experience with API Gateway and API security.
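For illustration only: this role centres on the AWS Java SDK, but the Lambda + SQS + DynamoDB pattern it describes can be sketched compactly in Python with boto3 (used here for consistency with the other examples on this page). The queue, table, and field names are hypothetical placeholders.

```python
# Illustrative only: a Lambda handler consuming an SQS event and persisting
# each message to DynamoDB. Table and field names are hypothetical placeholders.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-orders")

def handler(event, context):
    """Lambda handler triggered by an SQS event source mapping."""
    records = event.get("Records", [])
    for record in records:
        body = json.loads(record["body"])
        # Persist each message; using a stable key keeps retries idempotent.
        table.put_item(Item={"order_id": body["order_id"], "payload": body})
    return {"processed": len(records)}
```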

Posted 1 week ago

Apply

7.0 years

1 - 9 Lacs

Bengaluru

On-site

Organization: At CommBank, we never lose sight of the role we play in other people's financial wellbeing. Our focus is to help people and businesses move forward, to progress. To make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Senior Software Engineer – Data Modernization (GenAI)
Location: Manyata Tech Park, Bangalore (Hybrid)

Business & Team: CommSec is Australia's largest online retail stockbroker. It is one of the most highly visible and visited online assets in Australian financial services. CommSec's systems utilise a variety of technologies and support a broad range of investors. Engineers within CommSec are offered regular opportunities to work on some of the finest IT systems in Australia, as well as having the opportunity to develop careers across different functions and teams within the wider Bank.

Impact & Contribution: Apply core concepts, technology, and domain expertise to effectively develop software solutions to meet business needs. You will contribute to building a brighter future for all by ensuring that our team builds the best solutions possible using modern development practices that ensure both functional and non-functional needs are met. If you have a history of building a culture of empowerment and know what it takes to be a force multiplier within a large organization, then you're the kind of person we are looking for. You will report to the Lead Engineer within Business Banking Technology.

Roles & Responsibilities: Build scalable agentic AI solutions that integrate with existing systems and support business objectives. Implement MLOps pipelines. Design and conduct experiments to evaluate model performance and iteratively refine models based on findings. Hands-on experience in automated LLM outcome validation and metrication of AI outputs. Good knowledge of ethical AI practices and the tools to implement them. Hands-on experience in AWS cloud services such as SNS, SQS, and Lambda. Experience in big data platform technologies such as the Spark framework and vector databases. Collaborate with software engineers to deploy AI models in production environments, ensuring robustness and scalability. Participate in research initiatives to explore new AI models and methodologies that can be applied to current and future products. Develop and implement monitoring systems to track the performance of AI models in production. Hands-on DevSecOps experience, including continuous integration/continuous deployment and security practices.

Essential Skills: The AI Engineer will be involved in the development and deployment of advanced AI and machine learning models. The ideal candidate is highly skilled in MLOps and software engineering, with a strong track record of developing AI models and deploying them in production environments.
7+ years' experience
RAG, prompt engineering
Vector DB, DynamoDB, Redshift
Spark framework, Parquet, Iceberg
Python
MLOps
Langfuse, LlamaIndex, MLflow, GLEU, BLEU
AWS cloud services such as SNS, SQS, Lambda
Traditional machine learning

Education Qualifications: Bachelor's or Master's degree in engineering in Information Technology.

If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We're keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.

Advertising End Date: 06/08/2025
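As a hedged illustration of the "automated LLM outcome validation" and MLflow experience this posting asks for, the sketch below scores model answers against references with a toy metric and logs the result to an MLflow run. The metric, run name, and example data are hypothetical; a real pipeline would use BLEU/GLEU or an LLM judge as the posting implies.

```python
# Illustrative only: toy LLM output validation logged to MLflow.
import mlflow

def exact_match_score(prediction: str, reference: str) -> float:
    """Toy validation metric; real pipelines would use BLEU/GLEU or an LLM judge."""
    return 1.0 if prediction.strip().lower() == reference.strip().lower() else 0.0

def evaluate(pairs):
    scores = [exact_match_score(p, r) for p, r in pairs]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    # Hypothetical (prediction, reference) pairs from an LLM under test.
    eval_pairs = [("Paris", "paris"), ("42", "forty-two")]
    with mlflow.start_run(run_name="llm-output-validation-example"):
        mlflow.log_param("n_examples", len(eval_pairs))
        mlflow.log_metric("exact_match", evaluate(eval_pairs))
```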

Posted 1 week ago

Apply

13.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Location: Bangalore

About LeadSquared: One of the fastest-growing SaaS unicorn companies in the CRM space, LeadSquared empowers organizations with the power of automation. More than 2,000 customers with 2 lakh+ users across the globe utilize the LeadSquared platform to automate their sales and marketing processes and run high-velocity sales at scale. We are backed by prominent investors such as Westbridge Capital, Stakeboat Capital, and Gaja Capital, to name a few. We are expanding rapidly and our 1,300+ strong and still growing workforce is spread across India, the U.S., the Middle East, ASEAN, ANZ, and South Africa. Among the Top 50 fastest-growing tech companies in India as per the Deloitte Fast 50 program. Frost and Sullivan's 2019 Marketing Automation Company of the Year award. Among the Top 100 fastest-growing companies in FT 1000: High-Growth Companies Asia-Pacific. Listed as a Top Rated Product on G2 Crowd, GetApp, and TrustRadius.

Engineering @ LeadSquared: At LeadSquared, we like being up to date with the latest technology and utilizing the trending tech stacks to build our product. By joining the engineering team, you get to work first-hand with the latest web and mobile technologies and solve the challenges of scale, performance, security, and cost optimization. Our goal is to build the best SaaS platform for sales execution in the industry, and what better place than LeadSquared for an exciting career?

The Role: The LeadSquared platform and product suite are 100% on the cloud and currently all on AWS. The product suite comprises a large number of applications, services, and APIs built on various open-source and AWS-native tech stacks and deployed across multiple AWS accounts. The role involves leading the mission-critical responsibility of ensuring that all our online services are available, reliable, secure, performant, and running at optimal costs. We firmly believe in a code- and automation-driven approach to Site Reliability.

Responsibilities: Take ownership of release management with effective build and deployment processes by collaborating with development teams. Infrastructure and configuration management of production systems. Be a stakeholder in product scoping, performance enhancement, cost optimization, and architecture discussions with the Engineering leaders. Automate DevOps functions and take full control of source code repository management with continuous integration. Build a strong understanding of product functionality, customers' use cases, and architecture. Prioritize and meet the SLA for incidents and service management; also ensure that projects are managed and delivered on time and with quality. Recommend new technologies and tools that will automate manual tasks, improve observability, and enable faster troubleshooting. Make sure the team adheres to compliance and company policies with regular audits. Motivate, empower, and improve the team's technical skills.

Requirements: 13+ years' experience in building, deploying, and scaling software applications on the AWS cloud (preferably in SaaS). Deep understanding of observability and cost optimization of all major AWS services – EC2, RDS, Elasticsearch, Redis, SQS, API Gateway, Lambda, etc. AWS certification is a plus. Experience in building tools for deployment automation and observability response management for AWS resources; .NET, Python, and CFTs or Terraform are preferred. Operational experience in deploying, operating, scaling, and troubleshooting large-scale production systems on the cloud. Strong interpersonal communication skills (including listening, speaking, and writing) and ability to work well in a diverse, team-focused environment with other DevOps and engineering teams. Function well in a fast-paced, rapidly changing environment. 5+ years' experience in people management.

Why Should You Apply? Fast-paced environment. Accelerated growth and rewards. Easily approachable management. Work with the best minds and industry leaders. Flexible work timings. Interested? If this role sounds like you, then apply with us! You have plenty of room for growth at LeadSquared.
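To illustrate the kind of code-driven cost-optimization tooling this role describes, here is a small, hypothetical boto3 script that flags unattached EBS volumes per region. Region names and the check itself are placeholders; it is a sketch of the approach, not LeadSquared's tooling.

```python
# Illustrative only: flag unattached (billable but idle) EBS volumes per region.
import boto3

REGIONS = ["us-east-1", "ap-south-1"]  # hypothetical region list

def find_unattached_volumes(region: str):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_volumes")
    unattached = []
    # Volumes in the "available" state are not attached to any instance.
    for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
        unattached.extend(v["VolumeId"] for v in page["Volumes"])
    return unattached

if __name__ == "__main__":
    for region in REGIONS:
        vols = find_unattached_volumes(region)
        print(f"{region}: {len(vols)} unattached volumes", vols)
```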

Posted 1 week ago

Apply

8.0 years

30 - 35 Lacs

Bengaluru

Remote

Location: Preferred Hyderabad or Bangalore, but can be remote for the right candidate.
Experience: 8-10+ years

Skillset: Design and implement robust, production-grade pipelines using Python, Spark SQL, and Airflow to process high-volume file-based datasets (CSV, Parquet, JSON). Own the full lifecycle of core pipelines – from file ingestion to validated, queryable datasets – ensuring high reliability and performance. Build resilient, idempotent transformation logic with data quality checks, validation layers, and observability. Refactor and scale existing pipelines to meet growing data and business needs. Tune Spark jobs and optimize distributed processing performance. Implement schema enforcement and versioning aligned with internal data standards. Collaborate deeply with Data Analysts, Data Scientists, Product Managers, Engineering, Platform, SMEs, and AMs to ensure pipelines meet evolving business needs. Monitor pipeline health, participate in on-call rotations, and proactively debug and resolve production data flow issues. Contribute to the evolution of our data platform – driving toward mature patterns in observability, testing, and automation. Build and enhance streaming pipelines (Kafka, SQS, or similar) where needed to support near-real-time data needs. Help develop and champion internal best practices around pipeline development and data modeling.

Experience: 8-10 years of experience as a Data Engineer (or equivalent), building production-grade pipelines. Strong expertise in Python, Spark SQL, and Airflow. Experience processing large-scale file-based datasets (CSV, Parquet, JSON, etc.) in production environments. Experience mapping and standardizing raw external data into canonical models. Familiarity with AWS (or any cloud), including file storage and distributed compute concepts. Ability to work across teams, manage priorities, and own complex data workflows with minimal supervision. Strong written and verbal communication skills – able to explain technical concepts to non-engineering partners. Comfortable designing pipelines from scratch and improving existing pipelines. Experience working with large-scale or messy datasets (healthcare, financial, logs, etc.). Experience building, or willingness to learn, streaming pipelines using tools such as Kafka or SQS. Bonus: familiarity with healthcare data (837, 835, EHR, UB04, claims normalization).

Please share your updated resume with the below details:
Highest Education:
Total and Relevant Exp:
CCTC:
ECTC:
Any offer in hand or in pipeline:
Notice period:
Current location:

Job Type: Full-time
Pay: ₹3,000,000.00 - ₹3,500,000.00 per year
Benefits: Health insurance
Schedule: Day shift
Supplemental Pay: Performance bonus
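As a rough illustration of the Python + Airflow pipeline work described in this posting, here is a minimal, hypothetical DAG with the ingest, validate, and publish stages the role mentions. Task bodies are stubs and all names are placeholders, not an existing pipeline.

```python
# Illustrative only: a minimal Airflow DAG in the shape of the file-based
# pipeline described above (ingest -> validate -> publish).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest(**_):
    print("reading raw CSV/Parquet files from landing storage")

def validate(**_):
    # Idempotent data-quality gate; a real task would fail the run on bad data.
    print("running row-count and schema checks")

def publish(**_):
    print("writing the validated, queryable dataset to the curated zone")

with DAG(
    dag_id="example_file_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="ingest", python_callable=ingest)
    t2 = PythonOperator(task_id="validate", python_callable=validate)
    t3 = PythonOperator(task_id="publish", python_callable=publish)
    t1 >> t2 >> t3
```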

Posted 1 week ago

Apply

3.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Payoneer Founded in 2005, Payoneer is the global financial platform that removes friction from doing business across borders, with a mission to connect the world’s underserved businesses to a rising global economy. We’re a community with over 2,500 colleagues all over the world, working to serve customers, and partners in over 190 markets. By taking the complexity out of the financial workflows–including everything from global payments and compliance, to multi-currency and workforce management, to providing working capital and business intelligence–we give businesses the tools they need to work efficiently worldwide and grow with confidence. Location: Gurugram - India Full-time What You’ll Be Spending Your Time On Take a leadership role in achieving team goals, contributing to the overall design, architecture, development, quality, and production deployment of the team's systems Design and implement robust, scalable, and maintainable backend solutions for complex scenarios, ensuring high-quality results that may be consumed by other teams. Collaborate effectively within your team and with cross-functional partners, such as Product, Design/UX, DevOps, and other R&D teams, representing your team as needed. Maintain and improve the team's engineering practices, suggesting and implementing technology, patterns, or process enhancements. Proactively identify areas of improvement in team systems, processes, and scalability. Lead by example in code quality, contributing significantly to code reviews and acting as a focal point for engineering excellence questions. Help monitor production systems, investigate potential issues, and lead efforts to resolve critical production challenges while maintaining a customer-centric approach. Have You Done This Kind Of Stuff 3 - 8 years in backend software engineering roles, with demonstrated ability to navigate technical trade-offs and ambiguity effectively. Extensive experience in C# and .Net ecosystem is a mandatory requirement. Hands-on experience in SQL Server and database management Experience with message queues or streaming platforms (e.g., RabbitMQ, SQS, Kafka). Experience in writing unit test and strong knowledge of design principles, data structures and algorithms. Experience with microservices architecture. Experience in designing new functionality for existing complex components while maintaining scalability and performance. Ability to collaborate effectively and communicate technical concepts to diverse stakeholders. BSc/BE/B.Tech in Computer Science, Software Engineering, or equivalent degree Not a Must, But a Great Advantage Practical experience with Agile development methodologies. Familiarity with cloud platforms (AWS, Azure, or Google Cloud). Practical knowledge of non-relational databases (eg. MongoDB) Experience in mentoring new hires and interns, fostering a culture of collaboration and best practices. The Payoneer Ways of Working Act as our customer’s partner on the inside Learning what they need and creating what will help them go further. Continuously improve Always striving for a higher standard than our last. Do it. Own it. Being fearlessly accountable in everything we do. Build Each Other Up Helping each other grow, as professionals and people. If this sounds like a business, a community, and a mission you want to be part of, click now to apply. We are committed to providing a diverse and inclusive workplace. 
Payoneer is an equal opportunity employer, and all qualified applicants will receive consideration for employment no matter your race, color, ancestry, religion, sex, sexual orientation, gender identity, national origin, age, disability status, protected veteran status, or any other characteristic protected by law. If you require reasonable accommodation at any stage of the hiring process, please speak to the recruiter managing the role for any adjustments. Decisions about requests for reasonable accommodation are made on a case-by-case basis.

Posted 1 week ago

Apply

3.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Payoneer Founded in 2005, Payoneer is the global financial platform that removes friction from doing business across borders, with a mission to connect the world’s underserved businesses to a rising global economy. We’re a community with over 2,500 colleagues all over the world, working to serve customers, and partners in over 190 markets. By taking the complexity out of the financial workflows–including everything from global payments and compliance, to multi-currency and workforce management, to providing working capital and business intelligence–we give businesses the tools they need to work efficiently worldwide and grow with confidence. Location: Gurugram - India Full-time What You’ll Be Spending Your Time On Take a leadership role in achieving team goals, contributing to the overall design, architecture, development, quality, and production deployment of the team's systems Design and implement robust, scalable, and maintainable backend solutions for complex scenarios, ensuring high-quality results that may be consumed by other teams. Collaborate effectively within your team and with cross-functional partners, such as Product, Design/UX, DevOps, and other R&D teams, representing your team as needed. Maintain and improve the team's engineering practices, suggesting and implementing technology, patterns, or process enhancements. Proactively identify areas of improvement in team systems, processes, and scalability. Lead by example in code quality, contributing significantly to code reviews and acting as a focal point for engineering excellence questions. Help monitor production systems, investigate potential issues, and lead efforts to resolve critical production challenges while maintaining a customer-centric approach. Have You Done This Kind Of Stuff 3 - 8 years in backend software engineering roles, with demonstrated ability to navigate technical trade-offs and ambiguity effectively. Strong proficiency in C#, Java, or any similar object-oriented languages. Hands-on experience in SQL Server and database management Experience with message queues or streaming platforms (e.g., RabbitMQ, SQS, Kafka). Experience in writing unit test and strong knowledge of design principles, data structures and algorithms. Experience with microservices architecture. Experience in designing new functionality for existing complex components while maintaining scalability and performance. Ability to collaborate effectively and communicate technical concepts to diverse stakeholders. BSc/BE/B.Tech in Computer Science, Software Engineering, or equivalent degree Not a Must, But a Great Advantage Practical experience with Agile development methodologies. Familiarity with cloud platforms (AWS, Azure, or Google Cloud). Practical knowledge of non-relational databases (eg. MongoDB) Experience in mentoring new hires and interns, fostering a culture of collaboration and best practices. The Payoneer Ways of Working Act as our customer’s partner on the inside Learning what they need and creating what will help them go further. Continuously improve Always striving for a higher standard than our last. Do it. Own it. Being fearlessly accountable in everything we do. Build Each Other Up Helping each other grow, as professionals and people. If this sounds like a business, a community, and a mission you want to be part of, click now to apply. We are committed to providing a diverse and inclusive workplace. 
Payoneer is an equal opportunity employer, and all qualified applicants will receive consideration for employment no matter your race, color, ancestry, religion, sex, sexual orientation, gender identity, national origin, age, disability status, protected veteran status, or any other characteristic protected by law. If you require reasonable accommodation at any stage of the hiring process, please speak to the recruiter managing the role for any adjustments. Decisions about requests for reasonable accommodation are made on a case-by-case basis.

Posted 1 week ago

Apply

5.0 years

3 - 6 Lacs

Coimbatore

Remote

Title: Node.js Developer
Experience: 5+ Years
Location: Coimbatore and Remote
Preferred Skills: Node.js, AWS, Salesforce/SAP Integration

Job Brief: We are looking for experienced Node.js + AWS Integration Developers to design, build, and maintain scalable, cloud-native integration solutions. The ideal candidates will have a strong background in backend development, AWS cloud services, and enterprise system integrations such as Salesforce or SAP. You will collaborate with architects and cross-functional teams to define and implement best practices in integration projects.

Key Responsibilities: Design and develop scalable Node.js applications for system integrations. Build and maintain secure AWS Lambda functions and leverage services like API Gateway, SQS, and Step Functions. Collaborate with architects and teams to define integration strategies and standards. Troubleshoot, debug, and optimize the performance of integration pipelines. Ensure secure API development with proper authentication and authorization mechanisms. Contribute to CI/CD pipelines, logging, and monitoring of cloud services.

Requirements: 5+ years of Node.js backend development experience. Strong hands-on experience with AWS services (Lambda, API Gateway, SQS, Step Functions). Proven experience in integrating Salesforce or SAP systems. Deep understanding of RESTful API design, security, and authentication best practices. Experience with CI/CD pipelines and AWS monitoring tools.

If you're interested and actively looking for a change, please call or email using the details below.
Email Id: nivetha.br@applogiq.org
Ph Num: 9600377933

Job Type: Full-time
Work Location: In person

Posted 1 week ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY – Consulting – AWS Data Engineering Manager The Opportunity We are seeking an experienced and visionary AWS Data Engineering Manager to lead our data engineering initiatives within the Consulting practice and have about 7+ years of experience. This role is ideal for a strategic thinker with a strong technical foundation in AWS and data engineering, who can guide teams, architect scalable solutions, and drive innovation in data platforms. You will play a pivotal role in shaping data strategies, mentoring teams, and delivering impactful solutions for our clients. Key Responsibilities Lead the design and implementation of scalable data pipelines using AWS technologies, supporting both batch and real-time data processing. Architect robust data lake solutions based on the Medallion Architecture using Amazon S3 and integrate with Redshift and Oracle for downstream analytics. Oversee the development of data ingestion frameworks from diverse sources including on-premise databases, batch files, and Kafka streams. Guide the development of Spark streaming applications on Amazon EMR and batch processing using AWS Glue and Python. Manage workflow orchestration using Apache Airflow and ensure operational excellence through monitoring and optimization. Collaborate with cross-functional teams including data scientists, analysts, and DevOps to align data solutions with business goals. Provide technical leadership, mentorship, and performance management for a team of data engineers. Engage with clients to understand business requirements, define data strategies, and deliver high-quality solutions. Required Skills And Experience Proven leadership experience in managing data engineering teams and delivering complex data solutions. Deep expertise in AWS services including S3, Redshift, Glue, EMR, and Oracle. Strong programming skills in Python and Spark, with a solid understanding of data modeling and ETL frameworks. Hands-on experience with Kafka for real-time data ingestion and processing. Proficiency in workflow orchestration tools like Apache Airflow. Strong understanding of Medallion Architecture and data lake best practices. Preferred / Nice-to-Have Skills Experience with Infrastructure as Code (IaC) using Terraform. Familiarity with additional AWS services such as SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation. Knowledge of monitoring and alerting tools like CloudWatch, Datadog, or Splunk. Understanding of data security best practices for data at rest and in transit. Qualifications BTech / MTech / MCA / MBA or equivalent. AWS certifications (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect) are a plus. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
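For a concrete sense of the Kafka-to-data-lake ingestion this role covers, here is a hedged PySpark Structured Streaming sketch that writes raw events to a bronze (Medallion) layer in S3 as Parquet. Broker, topic, and bucket names are hypothetical, and the Kafka connector package is assumed to be available on the cluster; this is a sketch of the pattern, not EY's or a client's implementation.

```python
# Illustrative only: Kafka -> S3 bronze-layer ingestion with Structured Streaming.
# Requires the spark-sql-kafka connector on the cluster classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-bronze-ingest").getOrCreate()

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")   # hypothetical broker
    .option("subscribe", "example-events")                # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Keep payloads as strings in the bronze layer; refine schemas downstream.
bronze = raw.select(
    col("key").cast("string"),
    col("value").cast("string"),
    col("timestamp"),
)

query = (
    bronze.writeStream
    .format("parquet")
    .option("path", "s3://example-datalake/bronze/events/")
    .option("checkpointLocation", "s3://example-datalake/checkpoints/events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```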

Posted 1 week ago

Apply

7.0 years

6 - 10 Lacs

Noida

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY – Consulting – AWS Data Engineering Manager The Opportunity We are seeking an experienced and visionary AWS Data Engineering Manager to lead our data engineering initiatives within the Consulting practice and have about 7+ years of experience. This role is ideal for a strategic thinker with a strong technical foundation in AWS and data engineering, who can guide teams, architect scalable solutions, and drive innovation in data platforms. You will play a pivotal role in shaping data strategies, mentoring teams, and delivering impactful solutions for our clients. Key Responsibilities Lead the design and implementation of scalable data pipelines using AWS technologies, supporting both batch and real-time data processing. Architect robust data lake solutions based on the Medallion Architecture using Amazon S3 and integrate with Redshift and Oracle for downstream analytics. Oversee the development of data ingestion frameworks from diverse sources including on-premise databases, batch files, and Kafka streams. Guide the development of Spark streaming applications on Amazon EMR and batch processing using AWS Glue and Python. Manage workflow orchestration using Apache Airflow and ensure operational excellence through monitoring and optimization. Collaborate with cross-functional teams including data scientists, analysts, and DevOps to align data solutions with business goals. Provide technical leadership, mentorship, and performance management for a team of data engineers. Engage with clients to understand business requirements, define data strategies, and deliver high-quality solutions. Required Skills and Experience Proven leadership experience in managing data engineering teams and delivering complex data solutions. Deep expertise in AWS services including S3, Redshift, Glue, EMR, and Oracle. Strong programming skills in Python and Spark, with a solid understanding of data modeling and ETL frameworks. Hands-on experience with Kafka for real-time data ingestion and processing. Proficiency in workflow orchestration tools like Apache Airflow. Strong understanding of Medallion Architecture and data lake best practices. Preferred / Nice-to-Have Skills Experience with Infrastructure as Code (IaC) using Terraform. Familiarity with additional AWS services such as SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation. Knowledge of monitoring and alerting tools like CloudWatch, Datadog, or Splunk. Understanding of data security best practices for data at rest and in transit. Qualifications BTech / MTech / MCA / MBA or equivalent. AWS certifications (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect) are a plus. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

3.0 - 5.0 years

3 - 11 Lacs

Noida

On-site

Job Description for Automation QA

Key Responsibilities
● Test web and mobile applications/services, ensuring they meet high-quality standards.
● Conduct thorough testing of e-commerce platforms in the automobile domain (e.g., carwale.com, cars24.com).
● Perform backend REST API testing, ensuring correct data in databases and debugging issues through logs, network responses, and database validations.
● Collaborate with cross-functional teams (developers, product managers, DevOps) to define and execute comprehensive test plans and strategies.
● Analyze and debug integration workflows, particularly with third-party services such as payment gateways and authentication providers.
● Ensure exceptional frontend UI/UX quality with meticulous attention to detail.
● Write, execute, and maintain detailed test cases based on user stories and business requirements.
● Conduct regression, integration, and user acceptance testing (UAT) to validate product functionality.
● Monitor and analyze test results, report defects, and collaborate with developers for resolution.
● Use tools such as Postman, browser developer tools, and bug-tracking systems like JIRA effectively.
● Coordinate testing activities across multiple releases and environments.
● Facilitate test preparation, execution, and reporting while ensuring alignment with Agile frameworks.
● Maintain and update test documentation following requirement changes.
● Participate in daily stand-ups and sprint planning discussions, contributing to feature validation and delivery goals.
● Monitor and triage issues in collaboration with cross-functional teams to resolve them efficiently.

Required Skills & Qualifications
● 3-5 years of experience in automation testing with hands-on exposure to web and backend testing, preferably in the e-commerce/automobile industry.
● Strong proficiency in testing tools like Postman, browser developer tools, and bug-tracking systems.
● Solid understanding of SQL, PostgreSQL, Python, or MongoDB for data verification.
● Familiarity with async communication in services (e.g., AWS SQS, Apache Kafka) and debugging issues therein.
● Excellent knowledge of the software testing lifecycle (STLC) and Agile testing methodologies.
● Experience with version control systems like Git.
● Proven ability to debug issues in API integrations, logs, and databases.
● Strong communication and documentation skills for reporting bugs and preparing detailed test reports.
● Understanding of regression testing frameworks and expertise in functional and integration testing.

Additional Preferred Qualifications
● Experience with mobile testing frameworks and tools.
● Basic understanding of performance testing and debugging for optimized user experiences.
● Exposure to automation tools (not mandatory but advantageous).

Job Types: Full-time, Permanent
Pay: ₹300,000.00 - ₹1,100,000.00 per year
Benefits: Flexible schedule
Experience: QA: 3 years (Preferred); Postman: 2 years (Preferred); SQL: 2 years (Preferred); JIRA: 2 years (Preferred); Automation: 3 years (Preferred); Selenium with Java: 2 years (Preferred); Manual Testing: 3 years (Preferred)
Work Location: In person
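To make the backend REST API testing described above more concrete, here is a minimal, hypothetical pytest + requests check of an e-commerce vehicle endpoint. The base URL, endpoint, and fields are placeholders, not a real system; a test like this would typically be run with `pytest` against a staging environment.

```python
# Illustrative only: a minimal API contract check with pytest and requests.
# BASE_URL, the endpoint, and the expected fields are hypothetical placeholders.
import requests

BASE_URL = "https://api.example-dealer-platform.test"

def test_vehicle_listing_returns_expected_fields():
    response = requests.get(
        f"{BASE_URL}/v1/vehicles", params={"city": "Noida"}, timeout=10
    )

    # Contract checks: status code, content type, and mandatory fields.
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith("application/json")
    body = response.json()
    assert isinstance(body.get("results"), list)
    for vehicle in body["results"]:
        assert {"id", "make", "model", "price"} <= vehicle.keys()
```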

Posted 1 week ago

Apply

7.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY – Consulting – AWS Data Engineering Manager The Opportunity We are seeking an experienced and visionary AWS Data Engineering Manager to lead our data engineering initiatives within the Consulting practice and have about 7+ years of experience. This role is ideal for a strategic thinker with a strong technical foundation in AWS and data engineering, who can guide teams, architect scalable solutions, and drive innovation in data platforms. You will play a pivotal role in shaping data strategies, mentoring teams, and delivering impactful solutions for our clients. Key Responsibilities Lead the design and implementation of scalable data pipelines using AWS technologies, supporting both batch and real-time data processing. Architect robust data lake solutions based on the Medallion Architecture using Amazon S3 and integrate with Redshift and Oracle for downstream analytics. Oversee the development of data ingestion frameworks from diverse sources including on-premise databases, batch files, and Kafka streams. Guide the development of Spark streaming applications on Amazon EMR and batch processing using AWS Glue and Python. Manage workflow orchestration using Apache Airflow and ensure operational excellence through monitoring and optimization. Collaborate with cross-functional teams including data scientists, analysts, and DevOps to align data solutions with business goals. Provide technical leadership, mentorship, and performance management for a team of data engineers. Engage with clients to understand business requirements, define data strategies, and deliver high-quality solutions. Required Skills And Experience Proven leadership experience in managing data engineering teams and delivering complex data solutions. Deep expertise in AWS services including S3, Redshift, Glue, EMR, and Oracle. Strong programming skills in Python and Spark, with a solid understanding of data modeling and ETL frameworks. Hands-on experience with Kafka for real-time data ingestion and processing. Proficiency in workflow orchestration tools like Apache Airflow. Strong understanding of Medallion Architecture and data lake best practices. Preferred / Nice-to-Have Skills Experience with Infrastructure as Code (IaC) using Terraform. Familiarity with additional AWS services such as SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation. Knowledge of monitoring and alerting tools like CloudWatch, Datadog, or Splunk. Understanding of data security best practices for data at rest and in transit. Qualifications BTech / MTech / MCA / MBA or equivalent. AWS certifications (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect) are a plus. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY – Consulting – AWS Data Engineering Manager The Opportunity We are seeking an experienced and visionary AWS Data Engineering Manager to lead our data engineering initiatives within the Consulting practice and have about 7+ years of experience. This role is ideal for a strategic thinker with a strong technical foundation in AWS and data engineering, who can guide teams, architect scalable solutions, and drive innovation in data platforms. You will play a pivotal role in shaping data strategies, mentoring teams, and delivering impactful solutions for our clients. Key Responsibilities Lead the design and implementation of scalable data pipelines using AWS technologies, supporting both batch and real-time data processing. Architect robust data lake solutions based on the Medallion Architecture using Amazon S3 and integrate with Redshift and Oracle for downstream analytics. Oversee the development of data ingestion frameworks from diverse sources including on-premise databases, batch files, and Kafka streams. Guide the development of Spark streaming applications on Amazon EMR and batch processing using AWS Glue and Python. Manage workflow orchestration using Apache Airflow and ensure operational excellence through monitoring and optimization. Collaborate with cross-functional teams including data scientists, analysts, and DevOps to align data solutions with business goals. Provide technical leadership, mentorship, and performance management for a team of data engineers. Engage with clients to understand business requirements, define data strategies, and deliver high-quality solutions. Required Skills And Experience Proven leadership experience in managing data engineering teams and delivering complex data solutions. Deep expertise in AWS services including S3, Redshift, Glue, EMR, and Oracle. Strong programming skills in Python and Spark, with a solid understanding of data modeling and ETL frameworks. Hands-on experience with Kafka for real-time data ingestion and processing. Proficiency in workflow orchestration tools like Apache Airflow. Strong understanding of Medallion Architecture and data lake best practices. Preferred / Nice-to-Have Skills Experience with Infrastructure as Code (IaC) using Terraform. Familiarity with additional AWS services such as SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation. Knowledge of monitoring and alerting tools like CloudWatch, Datadog, or Splunk. Understanding of data security best practices for data at rest and in transit. Qualifications BTech / MTech / MCA / MBA or equivalent. AWS certifications (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect) are a plus. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY – Consulting – AWS Data Engineering Manager

The Opportunity
We are seeking an experienced and visionary AWS Data Engineering Manager, with 7+ years of experience, to lead our data engineering initiatives within the Consulting practice. This role is ideal for a strategic thinker with a strong technical foundation in AWS and data engineering, who can guide teams, architect scalable solutions, and drive innovation in data platforms. You will play a pivotal role in shaping data strategies, mentoring teams, and delivering impactful solutions for our clients.

Key Responsibilities
Lead the design and implementation of scalable data pipelines using AWS technologies, supporting both batch and real-time data processing. Architect robust data lake solutions based on the Medallion Architecture using Amazon S3, and integrate with Redshift and Oracle for downstream analytics. Oversee the development of data ingestion frameworks from diverse sources, including on-premises databases, batch files, and Kafka streams. Guide the development of Spark streaming applications on Amazon EMR and batch processing using AWS Glue and Python. Manage workflow orchestration using Apache Airflow and ensure operational excellence through monitoring and optimization. Collaborate with cross-functional teams, including data scientists, analysts, and DevOps, to align data solutions with business goals. Provide technical leadership, mentorship, and performance management for a team of data engineers. Engage with clients to understand business requirements, define data strategies, and deliver high-quality solutions.

Required Skills And Experience
Proven leadership experience in managing data engineering teams and delivering complex data solutions. Deep expertise in AWS services including S3, Redshift, Glue, and EMR, along with Oracle. Strong programming skills in Python and Spark, with a solid understanding of data modeling and ETL frameworks. Hands-on experience with Kafka for real-time data ingestion and processing. Proficiency in workflow orchestration tools like Apache Airflow. Strong understanding of the Medallion Architecture and data lake best practices.

Preferred / Nice-to-Have Skills
Experience with Infrastructure as Code (IaC) using Terraform. Familiarity with additional AWS services such as SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation. Knowledge of monitoring and alerting tools like CloudWatch, Datadog, or Splunk. Understanding of data security best practices for data at rest and in transit.

Qualifications
BTech / MTech / MCA / MBA or equivalent. AWS certifications (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect) are a plus.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
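
To illustrate the Apache Airflow orchestration duties listed above, here is a minimal, hypothetical DAG chaining two Glue jobs; the DAG id, job names, and schedule are placeholders, and the sketch assumes a recent Airflow 2.x with the Amazon provider package installed.

```python
# Hypothetical Airflow DAG chaining two Glue jobs (bronze -> silver -> gold).
# Job names, schedule, and region are assumptions for illustration only.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="medallion_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    bronze_to_silver = GlueJobOperator(
        task_id="bronze_to_silver",
        job_name="trades-bronze-to-silver",   # hypothetical, pre-existing Glue job
        region_name="ap-south-1",
    )

    silver_to_gold = GlueJobOperator(
        task_id="silver_to_gold",
        job_name="trades-silver-to-gold",     # hypothetical, pre-existing Glue job
        region_name="ap-south-1",
    )

    bronze_to_silver >> silver_to_gold
```

In practice the same DAG would also carry retries, alerting, and data-quality checks appropriate to the environment.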

Posted 1 week ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY – Consulting – AWS Data Engineering Manager

The Opportunity
We are seeking an experienced and visionary AWS Data Engineering Manager, with 7+ years of experience, to lead our data engineering initiatives within the Consulting practice. This role is ideal for a strategic thinker with a strong technical foundation in AWS and data engineering, who can guide teams, architect scalable solutions, and drive innovation in data platforms. You will play a pivotal role in shaping data strategies, mentoring teams, and delivering impactful solutions for our clients.

Key Responsibilities
Lead the design and implementation of scalable data pipelines using AWS technologies, supporting both batch and real-time data processing. Architect robust data lake solutions based on the Medallion Architecture using Amazon S3, and integrate with Redshift and Oracle for downstream analytics. Oversee the development of data ingestion frameworks from diverse sources, including on-premises databases, batch files, and Kafka streams. Guide the development of Spark streaming applications on Amazon EMR and batch processing using AWS Glue and Python. Manage workflow orchestration using Apache Airflow and ensure operational excellence through monitoring and optimization. Collaborate with cross-functional teams, including data scientists, analysts, and DevOps, to align data solutions with business goals. Provide technical leadership, mentorship, and performance management for a team of data engineers. Engage with clients to understand business requirements, define data strategies, and deliver high-quality solutions.

Required Skills And Experience
Proven leadership experience in managing data engineering teams and delivering complex data solutions. Deep expertise in AWS services including S3, Redshift, Glue, and EMR, along with Oracle. Strong programming skills in Python and Spark, with a solid understanding of data modeling and ETL frameworks. Hands-on experience with Kafka for real-time data ingestion and processing. Proficiency in workflow orchestration tools like Apache Airflow. Strong understanding of the Medallion Architecture and data lake best practices.

Preferred / Nice-to-Have Skills
Experience with Infrastructure as Code (IaC) using Terraform. Familiarity with additional AWS services such as SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation. Knowledge of monitoring and alerting tools like CloudWatch, Datadog, or Splunk. Understanding of data security best practices for data at rest and in transit.

Qualifications
BTech / MTech / MCA / MBA or equivalent. AWS certifications (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect) are a plus.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
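
For the Spark streaming responsibility mentioned above, a minimal Structured Streaming sketch reading Kafka into the S3 bronze layer might look as follows; brokers, topic, and paths are placeholders, and the job assumes the Spark Kafka connector is available on the EMR cluster.

```python
# Hypothetical Spark Structured Streaming job (EMR-style) reading a Kafka topic
# and landing micro-batches into the S3 bronze layer. Broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("kafka-to-bronze")
    .getOrCreate()
)

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")   # placeholder broker
    .option("subscribe", "trades")                        # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka values arrive as bytes; keep them as JSON strings plus ingestion metadata.
bronze = raw.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("ingested_at"),
)

query = (
    bronze.writeStream
    .format("parquet")
    .option("path", "s3://example-datalake/bronze/trades_stream/")
    .option("checkpointLocation", "s3://example-datalake/checkpoints/trades_stream/")
    .trigger(processingTime="1 minute")
    .start()
)

query.awaitTermination()
```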

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Rentokil PCI
Rentokil PCI is the leading pest control service provider in India. A Rentokil Initial brand, Rentokil PCI was formed in 2017 through a joint venture (JV) between Pest Control India, the number one pest control company in India, and Rentokil, the world's leading pest control brand. Rentokil PCI aims to set new standards for customer service, with operations across 300 locations in India. For more details: https://www.rentokil-pestcontrolindia.com

Our Family Of Businesses
Rentokil Pest Control is the world's leading commercial pest control company, operating in 70 countries and ranked in the top 3 in 65 of those countries. Ranking in the top 3 in 38 of the 44 countries we operate in, Initial Hygiene is the market leader, providing quality, diligent and friendly services to all customers. In France, Initial Workwear specialises in the supply and laundering of workwear, garments and protective uniforms and equipment, focussing on top quality products and services. Our plant business, Ambius, is seen as the expert in interior and exterior "landscaping", operating across the US, Europe, Asia & Pacific. Steritech Brand Protection by Rentokil Initial is an industry leader and pioneer, providing innovative solutions that help customers to mitigate risks and drive business growth. We also have specialist businesses such as Medical Services, Specialist Hygiene and Property Care, which lead their respective fields. Across all of our operations globally, we have a positive reputation amongst our customers for our knowledge and integrity. We have central support functions of Human Resources, IT, Finance, Legal and Marketing & Innovation in the Rentokil Initial Head Office locations and in country. Working within our functions departments, you would be supporting all of our businesses within India. Rentokil PCI is the leading pest control service provider in India. A Rentokil Initial brand, Rentokil PCI was formed in 2017 through a joint venture (JV) between Pest Control India, the number one pest control company in India, and Rentokil, the world's leading pest control brand. Rentokil PCI aims to set new standards for customer service with operations across 250 locations in India. The JV brand also focuses on developing industry-leading service operations through the sharing of best practices, new innovations and the use of digital technologies.

General Duties & Responsibilities
The OE shall be the owner of his / her service area for all operations-related actions and shall: Execute daily service operations with a team of assigned Technicians within a given service area. Ensure quality of service delivery through effective supervision of technicians on the job, as per company SOPs. Ensure technicians carry out treatment within a given Time on Site (ToS) in a competent manner (OE to engage technicians via route riding and on-the-job training). Plan and execute 2 TPAs (Technician Performance Assessments) per assigned technician, with 2 development programs per year. Coach and train assigned technicians in order to improve service quality. Convey special instructions, if any, to technicians to execute the job as per the Service Docket (liaising with Sales colleagues). Carry out pest management audits of customer sites as per the agreed schedule by i or R auditor. Complete and close customer audit non-conformities (external / internal). Follow up and implement CAPA at customer sites. On-Site Documentation: Implementation of SOPs, compliance, and closure of audit non-conformities (internal / external).
Send service dockets of completed services to NKA for invoicing on time. Handle assigned customer complaints in his / her service areas within 24 hours and resolve them at the earliest, as per the customer's convenience, updating the Root Cause in iCABS to ensure proper ticket closure. Identify and resolve service delivery issues in coordination with the Branch Manager. Conduct the daily 10-minute stand-up meeting and the monthly operations meeting. Be conversant with STP (Service Track Pest); monitor and analyse visit extraction notes for all high-infestation service visits on a daily basis and take action. Digital Initiative: Be conversant with all in-house systems. Maintain material consumption and overtime hours at the agreed target level for the assigned service area and technicians. Approve conveyance amounts for assigned technicians. Monitor and report to the ABM/BM on input costs at all major sites as per the agreed gross margin, and discuss action plans to bring them within limits. Actively drive Service & Product Leads for the assigned technician group within the service area by implementing STA (See, Tell, Ask) and T.I.M.E. (Train, Incentivise, Monitor, Engage) on-the-job coaching to create density of customers. Innovation: Conduct trials and report findings as per the guidelines; implement new service lines as per the SOPs. Minimum 18 customer visits per week for Resi & SA (Residential & Small Accounts segment heavy branch), including customer complaints. For specific site-based OEs, this number shall not be applicable; retention of customers at the site would be the main KPI, with all scheduled services completed efficiently and effectively. Inventory: Help the ABM/BM manage inventory - coordinate with other Ops colleagues to manage stock levels of branches and forecasting, indenting and receipt of material for the branch (as a function assigned within the branch by the BM). Ensure the APL (Approved Preparations List) is followed by all assigned technicians and all chemical containers have original labels. Ensure a proper schedule of maintenance and repairs of equipment is established and followed (via JOC). Promote the highest grooming standards (uniform, safety shoes, PPEs). Encourage technicians to plan their leaves in advance to curb absenteeism. Help resolve any grievances and IR issues of technicians and bring them to the notice of the ABM/BM on a day-to-day basis. Report any deviation that could impact service quality or technician productivity, such as over-commitments, recommendations regarding night service (if it is not needed), covered-area mismatch, etc.

Requirements
Do you have what it takes? If you want to be considered for this role you will need: Minimum B.Sc. (Chemistry / Zoology / Agriculture). Any prior experience in operations of pest management or the service industry is desirable. Proficiency in computer applications and systems, including Excel, Word and PowerPoint (or equivalents). Willingness to become well versed in internal company systems such as iCABS, STP, iAuditor, SRA, SQA, SQS, TPA, myRentokilPCI, Service Leads App, U+ etc.

Benefits
Are you interested? Here's what you can expect when you join us: Attractive base salary. Group Mediclaim insurance policy. Travel reimbursement.

Equal Opportunities
Rentokil Initial believes in supporting all employees to provide equal opportunities and avoid discrimination.
We also place emphasis on workplace diversity which means that we are serious about creating an inclusive environment that accepts each individual's differences, embraces their strengths and provides opportunities for all colleagues to achieve their full potential.

Posted 1 week ago

Apply

8.0 - 12.0 years

30 - 35 Lacs

Gurugram

Work from Office

Role Description
Write and maintain build/deploy scripts. Work with the Sr. Systems Administrator to deploy and implement new cloud infrastructure and designs. Manage existing AWS deployments and infrastructure. Build scalable, secure, and cost-optimized AWS architecture. Ensure best practices are followed and implemented. Assist in the deployment and operation of security tools and monitoring. Automate tasks where appropriate to improve response times to issues and tickets. Collaborate with Cross-Functional Teams: Work closely with development, operations, and security teams to ensure a cohesive approach to infrastructure and application security. Participate in regular security reviews and planning sessions. Incident Response and Recovery: Participate in incident response planning and execution, including post-mortem analysis and implementation of preventive measures. Continuous Improvement: Regularly review and update security practices and procedures to adapt to the evolving threat landscape. Analyze and remediate vulnerabilities, and advise developers of vulnerabilities requiring code changes. Create and maintain documentation and diagrams for application/security and network configurations. Ensure systems are monitored using tools such as Datadog, and that issues are logged and reported to the required parties.

Technical Skills
Experience with system administration, and with provisioning and managing cloud infrastructure and security monitoring. In-depth experience with infrastructure/security monitoring and operation of a product or service. Experience with containerization and orchestration such as Docker and Kubernetes/EKS. Hands-on experience creating system architectures and leading architecture discussions at a team or multi-team level. Understanding of how to model system infrastructure in the cloud with Amazon Web Services (AWS), AWS CloudFormation, or Terraform. Strong knowledge of cloud infrastructure services (AWS preferred) like Lambda, Cognito, SQS, KMS, S3, Step Functions, Glue/Spark, CloudWatch, Secrets Manager, Simple Email Service, and CloudFront. Familiarity with coding, scripting and testing tools (preferred). Strong interpersonal, coordination and multi-tasking skills. Ability to function both independently and collaboratively as part of a team to achieve desired results. Aptitude to pick up new concepts and technology rapidly, and the ability to explain them to both business and tech stakeholders. Ability to adapt and succeed in a fast-paced, dynamic startup environment. Experience with Nessus and other related infosec tooling.

Nice-to-have skills
Strong interpersonal, coordination and multi-tasking skills. Ability to work independently and follow through to achieve desired results. Quick learner, with the ability to work calmly under pressure and with tight deadlines. Ability to adapt and succeed in a fast-paced, dynamic startup environment.

Qualifications
BA/BS degree in Computer Science, Computer Engineering, or a related field; MS degree in Computer Science or Computer Engineering (preferred).
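
As a small, hypothetical example of the security monitoring and automation this role covers, the boto3 script below flags S3 buckets whose public access block is missing or incomplete; bucket handling and reporting are illustrative only.

```python
# Hypothetical security-review helper: flag S3 buckets without a full public access block.
# Assumes AWS credentials are available in the environment; purely illustrative.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def buckets_missing_public_access_block():
    findings = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            # All four flags must be True for a full public access block.
            if not all(config.values()):
                findings.append(name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                findings.append(name)
            else:
                raise
    return findings


if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print(f"Public access block incomplete: {name}")
```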

Posted 1 week ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

The senior Product Manager holds the responsibility for designing, developing, and overseeing activities related to a specific product or a group of products. This oversight encompasses everything from defining the product and planning its development to production and go-to-market strategies. Additionally, the Product Manager is tasked with crafting the product roadmap necessary to achieve bookings, client NPS, and gross margin targets associated with their component. To facilitate organic growth, the product manager collaborates with internal stakeholders, clients, and prospects to identify new product capability requirements. They maintain close collaboration with their development teams to ensure the successful creation and introduction of these new capabilities to the market. Furthermore, the Product Manager takes charge of testing and implementing these fresh features with clients and actively promotes future growth to a broader audience of Clearwater clients and prospects.

Responsibilities:
Team Span: responsible for handling a team of 20-50 developers. Prioritizes decisions across products. Establishes alignment on the product roadmap among multiple development teams. Exerts influence on shaping the company's roadmap. Efficiently leads the development of cross-product capabilities. Contributes to the formulation of the department's development and training plan. Advocates for a culture of communication throughout the organization. Is recognized as an industry expert and frequently represents CW on industry forum panels. Proficiently evaluates opportunities in uncharted territory. Independently identifies, assesses, and potentially manages partnership relationships with external parties. Delivers leadership and expertise to our continually expanding workforce.

Required Skills:
Domain Knowledge: Strong understanding of the alternative investments ecosystem, including (but not limited to) limited partnerships, mortgage loans, direct loans, private equity, and other non-traditional asset classes. AI / GenAI Exposure (Preferred): Experience in AI or GenAI-based projects, particularly in building platforms or solutions using Generative AI technologies, will be considered a strong advantage. Proven track record as a Product Manager (ideal but not vital) who owns all aspects of a successful product throughout its lifecycle in a B2B environment. Knowledge of investments and investment accounting (very important). Exemplary interpersonal, communication, and project management skills. Excellent team and relationship building abilities, with both internal and external parties (engineers, business stakeholders, partners, etc.). Ability to work well under pressure, multitask, and maintain keen attention to detail. Strong leadership skills, including the ability to influence via diplomacy and tact. Experience working with cloud platforms (AWS/Azure/GCP). Ability to work with relational and NoSQL databases. Strong computer skills, including proficiency in Microsoft Office. Excellent attention to detail and strong documentation skills. Outstanding verbal and written communication skills. Strong organizational and interpersonal skills. Exceptional problem-solving abilities.

Education and Experience:
Bachelor's/Master's degree in engineering or a related field. 7+ years of relevant experience. Professional experience in building distributed software systems, specializing in big data and NoSQL database technologies (Hadoop, Spark, DynamoDB, HBase, Hive, Cassandra, Vertica).
Experience working with indexing systems such as Elasticsearch and Solr/Lucene. Experience working with messaging systems such as Kafka, SQS, and SNS.
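
As a rough illustration of the messaging-system experience referenced above, the boto3 sketch below round-trips a message through SQS; the queue name and payload are hypothetical and not taken from the posting.

```python
# Hypothetical producer/consumer round trip on SQS; queue name and message shape are placeholders.
import json

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="demo-events")["QueueUrl"]

# Producer side: publish a small JSON event.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"event": "position_updated", "portfolio_id": "P-123"}),
)

# Consumer side: long-poll, process, then delete so the message is not redelivered.
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for message in response.get("Messages", []):
    print("processing:", json.loads(message["Body"]))
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```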

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

This role is pivotal in enabling a self-service cloud platform that promotes speed, consistency, and innovation. By streamlining infrastructure delivery and supporting DevOps practices, the Senior Engineer fosters a culture of ownership, autonomy, and continuous improvement, ensuring the platform evolves in step with organizational growth and strategic goals.

Senior Platform Engineer – Infrastructure as Code (AWS Cloud)
The Senior Platform Engineer (IaC – AWS Cloud) will lead the design, implementation, and maintenance of scalable, secure, and reliable cloud infrastructure. This strategic role demands deep expertise in Terraform and advanced proficiency in AWS, with a strong focus on empowering engineering teams through automation and self-service capabilities.

Key Responsibilities
Design and deliver infrastructure and platform solutions using Terraform to enable iterative and incremental product development. Implement and enforce security, compliance, and architectural best practices through Infrastructure-as-Code across multi-environment setups. Build scalable, reusable infrastructure components that reduce operational overhead and accelerate development velocity. Partner with cross-functional teams to ensure infrastructure is resilient, high-performing, and adaptable to evolving business needs.

Technical Expertise
Advanced proficiency in Infrastructure as Code (IaC) using Terraform, ensuring consistency and repeatability. Deep understanding of AWS Cloud, and familiarity with Azure or Google Cloud services (compute, networking, storage, IAM, and security). Experience with CI/CD pipelines, automation frameworks, and GitOps workflows. Competency in Docker, Kubernetes, and container-based architectures. Strong grasp of cloud security practices, including secrets management, policy enforcement, and compliance automation. Skilled in monitoring and observability tools to ensure infrastructure health and reliability. Proficiency in scripting languages (Python, Bash, PowerShell) for automation and API integrations. Excellent communication and collaboration skills to effectively engage with engineering, product, and operations teams.

Key Responsibilities & Outcomes
Cloud Infrastructure Design & Automation: Lead the development and implementation of AWS cloud infrastructure using Terraform, leveraging Infrastructure-as-Code (IaC) principles to create automated scripts and templates for provisioning, managing, and scaling cloud services such as CloudFront, CloudWatch, S3, SNS, and SQS, ensuring efficient and resilient operations.
Scalability, Reliability & Security Assurance: Drive the deployment of new AWS EC2 instances and supporting infrastructure by applying best practices in cloud migration, backup, disaster recovery (DR) failover, security patching, and system integration, ensuring optimal performance, scalability, and data integrity.
Change & Incident Management: Collaborate with the Optus release team to design and implement automated IaC-based release processes. Lead incident response and resolution initiatives for cloud infrastructure issues, minimizing service disruptions and reinforcing operational stability.
Observability & Performance Optimization: Proactively monitor and troubleshoot cloud infrastructure to detect early signs of failure. Identify and implement opportunities to enhance system performance, reduce operational costs, improve reliability, and strengthen overall security posture.
Mentorship & Capability Building: Coach and guide junior engineers in IaC methodologies and cloud architecture best practices. Foster technical excellence by building a scalable, secure, and high-performing platform infrastructure that supports critical applications and services.

Qualifications & Experience
Bachelor’s degree in Computer Science, Computer Engineering, Information Technology, or a related field. 2–3 years of hands-on experience in AWS Cloud Engineering with a focus on Infrastructure as Code (IaC) using Terraform. Proven track record in designing, implementing, and managing scalable, secure cloud infrastructure for Telco or digital enterprise environments.

Technical & Professional Skills
AWS Cloud Expertise: Proficient in provisioning, automating, and troubleshooting AWS services including EC2, VPC, S3, IAM, Route53, CloudFront, SQS, and SNS. Deep understanding of AMI lifecycle automation using Systems Manager (SSM), CloudFormation, and Image Builder.
Infrastructure as Code & Automation: Advanced Terraform skills for building reusable, modular infrastructure. Expertise in CI/CD pipeline integration using tools such as Jenkins, GitLab CI, or AWS CodePipeline. Skilled in automated infrastructure testing and validation to ensure compliance, security, and stability.
Containerization & Orchestration: Experience designing immutable infrastructure using Docker and orchestrating with Kubernetes. Familiarity with modern microservices architectures and related deployment strategies.
DevOps & GitOps Workflows: Strong command of distributed version control systems (Git), with practical experience in GitOps workflows for managing infrastructure changes. Background in software release engineering, including artifact management, environment promotion, and rollback strategies.
Scripting & API Integration: Proficiency in automation and tooling using Python, Bash, or PowerShell. In-depth knowledge of AWS APIs and SDKs for building custom integrations and operational tooling.
Collaboration & Problem Solving: Strong analytical, decision-making, and troubleshooting abilities. Excellent communication and cross-functional collaboration skills, with the ability to articulate technical concepts to diverse teams.

Impact & Culture
This role is pivotal in enabling a self-service cloud platform that promotes speed, consistency, and innovation. By streamlining infrastructure delivery and supporting DevOps practices, the Senior Engineer fosters a culture of ownership, autonomy, and continuous improvement, ensuring the platform evolves in step with organizational growth and strategic goals.
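
The role centres on Terraform; as a loose, Python-based illustration of the same Infrastructure-as-Code idea, the sketch below uses the AWS CDK (an alternative IaC tool) to provision an SNS topic fanned out to an SQS queue. Construct names and settings are hypothetical.

```python
# Minimal AWS CDK (v2, Python) sketch of IaC for an SNS topic fanned out to an SQS queue.
# CDK here is a stand-in for the Terraform modules the role describes; names are hypothetical.
import aws_cdk as cdk
from aws_cdk import aws_sns as sns
from aws_cdk import aws_sns_subscriptions as subs
from aws_cdk import aws_sqs as sqs
from constructs import Construct


class MessagingStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Queue that downstream workers poll; visibility timeout sized for the handler.
        queue = sqs.Queue(
            self, "EventsQueue",
            visibility_timeout=cdk.Duration.seconds(60),
        )

        # Topic that producers publish to, fanned out to the queue.
        topic = sns.Topic(self, "EventsTopic")
        topic.add_subscription(subs.SqsSubscription(queue))


app = cdk.App()
MessagingStack(app, "messaging-dev")   # hypothetical stack name
app.synth()
```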

Posted 1 week ago

Apply

50.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Client: Our client is a French multinational information technology (IT) services and consulting company, headquartered in Paris, France. Founded in 1967, it has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations. The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organizations accelerate their transition to a digital and sustainable world. They provide a variety of services, including consulting, technology, professional, and outsourcing services.

Job Details
Position: AWS Lambda Backend Developer
Experience Required: 8-10 years
Notice: Immediate
Work Location: Hyderabad
Mode of Work: Hybrid
Type of Hiring: Contract to Hire

Job Description
Required Skills: Extensive experience using key AWS services: Lambda, Fargate, SQS, SNS. Contribute to the integration and configuration of CI/CD pipelines (preferably with experience using Azure DevOps or GitHub). Proficiency in building back-end services and APIs using .NET Core with a focus on performance and scalability. Experience using TypeScript, especially in AWS Lambda functions or microservices. Experience with both SQL and NoSQL databases and an understanding of when to use each in various application contexts. Prior experience with observability in cloud environments. IaC (Infrastructure as Code): Familiarity with IaC tools like SST.dev (preferred), Pulumi, Terraform, or AWS CDK to provision and manage infrastructure. Familiarity with Agile workflows and tools like JIRA. Can-do attitude: willing to step into frontend work as needed. Adaptability: open-mindedness toward frameworks and technologies; you will be given room to experiment with technologies that suit the project, so a willingness to try new ideas is essential. Collaboration: strong communication skills to collaborate with cross-functional teams and contribute to technical discussions on architecture, tooling, and best practices. Agility: ability to thrive in a fast-paced, iterative environment, delivering new features and bug fixes in two-week sprints.

Desirable Skills: Prior experience working with Salesforce (preferred) or other CRM platforms. Some previous experience with front-end development in JavaScript/TypeScript. AWS compute, storage, messaging: Lambda, Elastic Container Service (ECS), ECR, Fargate, S3, Aurora RDS, DynamoDB, EventBridge, SQS, SNS. AWS monitoring and logging: CloudWatch, CloudTrail. AWS networking: Region, Route53, CloudFront, AppSync, Certificate Manager, VPC, WAF, Shield, API Gateway, Application Load Balancer, Internet Gateway, NAT Gateway. AWS security, identity and compliance: GuardDuty, Secrets Manager, AWS Config, KMS, Cognito, Macie, Inspector, IAM, WAF, Shield. Experience with integration of Single Sign-On (SSO) solutions like OKTA and implementing security best practices.
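
The posting targets .NET Core and TypeScript Lambdas; purely to illustrate the SQS-triggered Lambda pattern it relies on, here is a minimal handler sketch in Python showing the event shape and per-record processing. The payload and logging are hypothetical.

```python
# Illustrative SQS-triggered Lambda handler. The posting targets .NET Core / TypeScript,
# so this Python sketch only shows the event shape and the per-record processing pattern.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def handler(event, context):
    # An SQS event delivers a batch under "Records"; each record's body is the raw message.
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        logger.info("processing message %s: %s", record.get("messageId"), payload)
        # ... business logic would go here (e.g., call a downstream API or write to a datastore)

    # Empty list signals the whole batch succeeded (partial-batch response shape).
    return {"batchItemFailures": []}
```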

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role: Full Stack Engineer (.NET 8 and AWS Services)
Location: Chennai
Type: Full-time

Key Responsibilities: Develop and maintain applications using .NET 8+, C#, SQL/NoSQL, AngularJS, Web Components, StencilJS, and TypeScript. Build cloud-native solutions using AWS (SNS, SQS, Lambda, etc.). Write clean code, perform code reviews, and ensure optimal app performance. Collaborate across teams and document technical processes. Support legacy app management (training provided).

Required Skills: Strong in .NET 8 and AWS Cloud services. Frontend expertise: Web Components, StencilJS, TypeScript, AngularJS. DB knowledge: SQL Server, MySQL, NoSQL. Experience with Bitbucket, microservices, Docker/Kubernetes, CI/CD, and DevOps. Familiarity with distributed systems, security practices, and a Shift-Left mindset. Excellent communication and problem-solving abilities.

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies