
1090 S3 Jobs - Page 17

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

14.0 - 20.0 years

0 Lacs

Maharashtra

On-site

As a Principal Architect - Data & Cloud at Quantiphi, you will bring your 14-20 years of experience in Technical, Solutioning, and Analytical roles to lead the architecture, design, and implementation of end-to-end data pipelines and data integration solutions for structured and unstructured data sources and targets. With a focus on Cloud platforms such as GCP, AWS, and Azure, you will be responsible for building and managing Data Lakes, Data Warehouses, Data Integration, Data Migration, and Business Intelligence/Artificial Intelligence solutions. Your role will involve understanding business requirements and translating them into functional and non-functional areas, defining boundaries in terms of Availability, Scalability, Performance, Security, and Resilience.

You will leverage your expertise in various Data Integration and ETL technologies on Cloud, including Spark, PySpark/Scala, Dataflow, DataProc, and more, and will also have the opportunity to work with traditional ETL tools such as Informatica, DataStage, OWB, and Talend. Your deep knowledge of Cloud and On-Premise databases such as Cloud SQL, Cloud Spanner, Bigtable, RDS, and Aurora will be instrumental in architecting scalable data warehouse solutions on platforms like BigQuery or Redshift. Moreover, your exposure to NoSQL databases and experience with data integration, storage, and data pipeline tool sets will be crucial in designing optimized data analytics solutions.

As a thought leader in the architecture, design, and development of cloud data analytics solutions, you will collaborate with internal and external stakeholders to present solutions, support sales teams in building proposals, and lead discovery workshops with potential customers globally. Your role will also involve mentoring young talent, contributing to building Assets and Accelerators, and ensuring the successful delivery of projects on the parameters of Schedule, Quality, and Customer Satisfaction.

The position offers the experience of working in a high-growth startup in the AI, Decision Science, and Big Data domain, along with the opportunity to be part of a diverse and proactive team that constantly raises the bar in translating data into tangible business value for clients. Flexible remote working options are available to foster productivity and work-life balance. If you are passionate about innovation, excellence, and growth, and enjoy working with a dynamic team of tech enthusiasts, Quantiphi is the place for you to shape your career in Data & Cloud architecture. Join us on our journey of digital transformation and be a part of creating impactful solutions that drive business success.

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

Join a dynamic leader in the cloud data engineering sector, specializing in advanced data solutions and real-time analytics for enterprise clients. This role offers an on-site opportunity in India to work on cutting-edge AWS infrastructure where innovation is at the forefront of business transformation. The ideal candidate is a professional with 4+ years of proven experience in AWS data engineering, Python, and PySpark. You will play a crucial role in designing, optimizing, and maintaining scalable data pipelines that drive business intelligence and operational efficiency.

As part of your responsibilities, you will design, develop, and maintain robust AWS-based data pipelines using Python and PySpark. You will implement efficient ETL processes, ensuring data integrity and optimal performance across AWS services such as S3, Glue, EMR, and Redshift. Collaboration with cross-functional teams to integrate data engineering solutions within broader business-critical applications will be a key aspect of your role. Additionally, you will troubleshoot and optimize existing data workflows to ensure high availability, scalability, and security of cloud solutions. It is essential to exercise best practices in coding, version control, and documentation to maintain a high standard of engineering excellence.

The required skills and qualifications include 4+ years of hands-on experience in AWS data engineering with expertise in Python and PySpark. Proficiency in developing and maintaining ETL processes using AWS services like S3, Glue, EMR, and Redshift is a must, along with strong problem-solving skills and a deep understanding of data modeling, data warehousing concepts, and performance optimization.

Preferred qualifications include experience with AWS Lambda, Airflow, or similar cloud orchestration tools; familiarity with containerization, CI/CD pipelines, and infrastructure-as-code tools like CloudFormation and Terraform; and AWS certifications or equivalent cloud credentials. In this role, you will work in a collaborative, fast-paced environment that rewards innovation and continuous improvement. You will have opportunities for professional growth and skill development through ongoing projects and training, and you will benefit from competitive compensation and the ability to work on transformative cloud technology solutions.
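The ETL work this role describes (extract records, validate and transform them, load the result) can be sketched in plain Python; a production PySpark job on Glue or EMR would express the same transform with DataFrame operations. All names and data below are illustrative, not from the posting:

```python
import csv
import io

def transform(rows):
    """Core of an ETL transform step: drop malformed records (data
    integrity) and normalise the amount field for downstream use."""
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except (KeyError, TypeError, ValueError):
            continue  # reject malformed records instead of propagating them
        clean.append({"id": row["id"], "amount": amount})
    return clean

def run_pipeline(raw_csv):
    """Extract from CSV text, transform, and 'load' by returning an aggregate."""
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    clean = transform(rows)
    return sum(r["amount"] for r in clean), len(clean)

# row 'b' has an empty amount and is dropped by the integrity check
total, count = run_pipeline("id,amount\na,10.5\nb,\nc,4.5\n")
```

In a real pipeline the extract and load steps would read from and write to S3 (e.g. via Glue), but the validate-then-transform shape is the same.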

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Haryana

On-site

You will be working as a Data Platform Engineer based in Gurgaon, with a minimum of 4 years of experience. In this role, you will be responsible for managing AWS environments, ensuring high performance, security, and availability. Your expertise in AWS SysOps, AWS DMS for database migrations, data processing pipelines, and Infrastructure as Code (Terraform) will be essential for success in this position. Collaborating with teams such as data engineering, analytics, and DevOps will be crucial to delivering scalable solutions for enterprise-level data platforms.

Your responsibilities will include designing, configuring, and maintaining AWS DMS, developing data workflows, implementing infrastructure as code using Terraform, monitoring system health, and ensuring compliance with security and disaster recovery best practices.

To qualify for this role, you should have at least 4 years of experience in cloud infrastructure and data platform operations. Proficiency in AWS SysOps, AWS DMS, ETL/data processing pipelines, Terraform, and other AWS services such as EC2, S3, RDS, IAM, CloudWatch, and Lambda is required, along with strong troubleshooting, analytical, and communication skills. Experience with containerization, CI/CD pipelines, DevOps practices, and big data tools will be considered advantageous. A bachelor's degree in Computer Science, Information Technology, or a related field is preferred.

This is a full-time onsite position at the Gurgaon office, where your expertise in AWS, data platforms, and infrastructure automation will play a vital role in delivering robust and scalable data solutions.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer (AWS) at our company, you will draw on your 3-5 years of experience, with a minimum of 3 years specifically focused on AWS cloud services. Your main responsibilities will revolve around providing technical support for data engineering systems, including troubleshooting issues related to data pipelines, AWS services, Snowflake, Hadoop, and Spark.

Your day-to-day tasks will include investigating and resolving issues in data pipelines, optimizing Spark jobs, managing incidents related to data processing, and collaborating with cross-functional teams to address critical issues promptly. You will also need a solid understanding of big data architectures such as Hadoop, Spark, Kafka, and Hive.

To excel in this role, you must have hands-on experience with Hadoop and Spark with Python on AWS, knowledge of Terraform templates for infrastructure provisioning, and at least 3 years of experience in BI/DW development with Data Model Architecture/Design. Familiarity with CI/CD implementation, scheduling tools and techniques on Hadoop/EMR, and best practices in cloud-based data engineering and support will be highly beneficial.

Your technical essentials should include proven experience in providing technical support for data engineering systems; a strong understanding of AWS services like S3, Glue, Redshift, EMR, Lambda, Athena, and Step Functions; and hands-on experience supporting Snowflake, Hadoop, Spark, and Python in a production environment. Additionally, you should possess excellent problem-solving skills, analytical abilities, and the communication skills to work effectively with cross-functional teams. Preferred qualifications include the AWS Certified Solutions Architect - Associate certification.

As a self-motivated team player with strong analytical skills and effective communication abilities, you will thrive in our dynamic and passionate work environment. If you are looking to work with a team of enthusiastic professionals and enjoy continuous growth, this position is perfect for you.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

We are seeking a customer-obsessed, analytical Senior Staff Engineer to take charge of the development and expansion of our Tax Compliance product suite. As a key member of our team, you will be instrumental in creating cutting-edge digital solutions that streamline and automate tax filing, reconciliation, and compliance processes for businesses of all sizes. Join our rapidly growing company and immerse yourself in a dynamic and competitive market, where you can play a pivotal role in helping businesses fulfill their statutory obligations efficiently, accurately, and confidently.

In this role, you will have the following key responsibilities:

- Lead a high-performing engineering team or serve as a hands-on technical lead.
- Spearhead the design and implementation of scalable backend services using Python.
- Utilize your expertise in Django, FastAPI, and task orchestration systems.
- Take ownership of and enhance our CI/CD pipelines through Jenkins to ensure swift, secure, and dependable deployments.
- Architect and oversee infrastructure leveraging AWS and Terraform with a DevOps-centric approach.
- Collaborate closely with product managers, designers, and compliance specialists to deliver features that enhance the seamless tax compliance experience for our users.
- Demonstrate proficiency in containerization tools like Docker and orchestration with Kubernetes.
- Apply background knowledge in security, observability, or compliance automation.

To be successful in this role, you should meet the following requirements:

- Over 5 years of software engineering experience, with a minimum of 2 years in a leadership or principal-level position.
- Deep proficiency in Python/Node.js, encompassing API development, performance enhancement, and testing.
- Experience in event-driven architecture and Kafka/RabbitMQ-like technologies.
- Strong familiarity with AWS services such as ECS, Lambda, S3, RDS, and CloudWatch.
- Solid grasp of Terraform for managing infrastructure as code.
- Proficiency in Jenkins or similar CI/CD tools.
- The ability to effectively balance technical leadership with hands-on coding and creative problem-solving.
- Excellent communication skills and a collaborative approach.

This opportunity has been shared by Parvinder Singh from Masters India.
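The event-driven architecture this posting asks for can be illustrated with a minimal in-process publish/subscribe sketch; in production, Kafka or RabbitMQ replaces the in-memory broker and subscribers run as separate services. Topic and payload names below are illustrative only:

```python
from collections import defaultdict

class Broker:
    """Toy pub/sub broker: topics fan events out to subscribers,
    mimicking the decoupling Kafka/RabbitMQ provide between services."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subs[topic]:
            handler(event)

broker = Broker()
filings = []
# a downstream 'compliance' service reacts to tax-filing events
broker.subscribe("filing.submitted", lambda e: filings.append(e["id"]))
broker.publish("filing.submitted", {"id": "GSTR1-042"})
```

The design point is that the publisher never references its consumers, so new services can subscribe to `filing.submitted` without changing the filing code.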

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

As a Frontend Engineer at CodeChavo, you will be a key player in building world-class responsive web applications. Your passion for user experience and real-time performance will drive you to excel in this fast-paced startup environment.

Your responsibilities will include developing responsive and high-performance frontend applications using React.js and TypeScript. You will implement real-time features and messaging experiences using WebSockets / Socket.io. Collaborating closely with design and product teams, you will translate product requirements into scalable UI components. Leading by example, you will guide frontend architecture, conduct code reviews, mentor junior developers, and foster a strong engineering culture. Taking ownership of end-to-end delivery, you will handle development, deployment, monitoring, and error tracking. Rapid prototyping and vibe-based coding will be your tools to deliver fast without compromising on code quality. Your contribution to project planning and estimation will be crucial as you collaborate cross-functionally to meet release timelines. Applying a strong sense of design and UI aesthetics, you will ensure polished, accessible, and user-friendly interfaces.

To be successful in this role, you should have at least 5 years of frontend development experience, preferably in SaaS or high-scale applications. Deep expertise in React.js, state management tools like Redux or Zustand, and real-time communication technologies is essential. Proficiency in modern frontend tooling, frontend security practices, Git workflows, and DSA foundation will be beneficial.

If you are ready to make a real impact in the digital transformation space and contribute to building quality tech teams, we would love to have you on board at CodeChavo.
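The Redux-style state management this role requires centres on a pure reducer: a function from (state, action) to a new state, never mutating the old one. Real Redux is JavaScript; the sketch below expresses the same pattern in Python purely for illustration, with made-up action names:

```python
def reducer(state, action):
    """Pure reducer: returns a new state dict rather than mutating the
    old one -- the property that makes Redux-style state predictable."""
    if action["type"] == "MESSAGE_RECEIVED":
        return {**state, "messages": state["messages"] + [action["payload"]]}
    if action["type"] == "CLEAR":
        return {**state, "messages": []}
    return state  # unknown actions leave state unchanged

state = {"messages": []}
state = reducer(state, {"type": "MESSAGE_RECEIVED", "payload": "hi"})
state = reducer(state, {"type": "MESSAGE_RECEIVED", "payload": "there"})
```

Because every transition is a plain function call, real-time message handling (e.g. a WebSocket `onmessage` dispatching `MESSAGE_RECEIVED`) becomes trivially unit-testable.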

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

Haryana

On-site

You are a Dot Net Full Stack Developer with expertise in React.js and Node.js technologies. Your role involves designing, developing, and maintaining web applications with a strong focus on both frontend and backend services. You will collaborate with UX/UI designers to create user-friendly interfaces and integrate RESTful APIs for seamless communication between systems. Utilizing an RDBMS for data storage and retrieval, you will ensure optimal database design and performance. Your responsibilities also include deploying applications on AWS services like EC2, S3, and Lambda, and writing efficient GQL queries for interacting with graph databases. Integration of messaging systems such as Kafka for real-time data processing is a key aspect of the job.

As a Full Stack Developer, you will participate in code reviews, unit testing, and debugging to maintain code quality. Your qualifications include a Bachelor's degree in computer science, proven experience with React and REST APIs, and knowledge of RDBMS technologies. Familiarity with AWS services, GQL, messaging systems like Kafka, and proficiency in using AI tools like GitHub Copilot are essential for this role. Strong problem-solving skills, the ability to work independently and collaboratively, and excellent communication skills are important attributes for success in this position.

If you are interested in this opportunity, please share your updated resume with hema.g@s3staff.com.

Posted 1 month ago

Apply

2.0 - 5.0 years

0 - 0 Lacs

Bangalore

On-site

Senior Software Engineer (Python), 2-5 yrs Exp, Bangalore, 6-9L CTC

Job Category: IT & Software
Job Type: Full Time
Job Location: Bangalore
Salary: 9-13 LPA
Years of Experience: 2-5 years

Job Summary: We are seeking a Python Engineer with strong experience in AWS cloud services to join our engineering team. You will design, develop, and deploy scalable backend systems and data-driven services, leveraging modern cloud-native architectures. Ideal candidates are highly proficient in Python and have hands-on experience with key AWS services such as Lambda, S3, DynamoDB, API Gateway, and more.

Key Responsibilities:

- Develop and maintain backend services, microservices, and APIs using Python.
- Design and implement cloud-native applications on AWS, ensuring scalability and high availability.
- Work with AWS Lambda, API Gateway, S3, DynamoDB, CloudWatch, IAM, etc.
- Build and optimize data processing pipelines (e.g., using Python, Glue, or Step Functions).
- Integrate third-party APIs and design secure, efficient interfaces.
- Collaborate with DevOps to implement CI/CD and infrastructure-as-code (e.g., using Terraform or AWS CDK).
- Write unit, integration, and performance tests for backend components.
- Participate in code reviews, architecture discussions, and sprint planning.

Required Skills & Qualifications:

- 3-6 years of professional software development experience with Python.
- Strong understanding of RESTful APIs, microservices, and asynchronous programming.
- Minimum 2 years of hands-on experience with AWS; must have used Lambda, S3, DynamoDB, and API Gateway in production.
- Familiarity with IAM, VPC, CloudWatch, and CloudFormation/Terraform.
- Experience working with databases (SQL and NoSQL).
- Solid grasp of software engineering principles, Git, and version control workflows.
- Strong communication skills and the ability to collaborate in agile teams.

Nice-to-Have Skills:

- Experience with Docker and container orchestration (ECS, EKS).
- Exposure to data engineering tools such as AWS Glue, Athena, and Step Functions.
- Experience with event-driven architectures (e.g., SNS, SQS, Kinesis).
- Familiarity with CI/CD pipelines (e.g., GitHub Actions, CodePipeline, Jenkins).
- Knowledge of security best practices in cloud applications.
- Open to UK shifts.

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
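An AWS Lambda handler of the kind this role centres on is just a Python function taking an event and a context. The sketch below follows the API Gateway proxy-integration event shape (`body`, `statusCode`); the squaring logic is purely illustrative:

```python
import json

def handler(event, context):
    """Minimal Lambda handler behind API Gateway (proxy integration).

    Parses a JSON request body, validates it, and returns the
    statusCode/body response shape API Gateway expects."""
    try:
        body = json.loads(event.get("body") or "{}")
        n = int(body["n"])
    except (KeyError, TypeError, ValueError):
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing or invalid 'n'"})}
    return {"statusCode": 200, "body": json.dumps({"square": n * n})}

# invoked locally the same way Lambda would call it
resp = handler({"body": '{"n": 7}'}, None)
```

Because the handler is a plain function, the unit tests the posting asks for can call it directly with synthetic events, with no AWS account involved.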

Posted 1 month ago

Apply

1.0 - 5.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

As a CloudOps Engineer in the Information Technology department, you will be responsible for managing and maintaining the AWS/Azure cloud platform. Your duties will include configuring cloud services, designing enterprise systems, and administering cloud environments using the Console and CLI. You will work closely with the business to define cloud system specifications and ensure effective support activities are in place.

Your role will involve understanding architecture requirements and managing highly scalable architectures with multiple DR/Production client environments, utilizing AWS services such as EC2, VPC, ELB, S3, CloudWatch, EventBridge, SNS, IAM, CloudFront, and Lambda. You will also be responsible for securely managing three-tier architectures, Security Groups, Auto Scaling, Load Balancers, and Bastion hosts. Additionally, you will create and manage AMIs, Snapshots, and Volumes, and upgrade or downgrade AWS resources as needed. Monitoring resources such as EC2 instances and load balancers, configuring CloudWatch alarms, and managing AD servers for user access will be part of your daily tasks. You will also handle lifecycle policies, VPC peering, flow logs, monthly patching, and user/role management within the AWS environment.

This is a full-time, permanent position with benefits including health insurance. The work schedule is a day shift, and preferred experience includes at least 1 year with AWS CloudFormation. The work location is in person.

If you are a proactive individual with a strong technical background in cloud operations and a passion for ensuring the reliability and security of cloud environments, this role offers the opportunity to contribute significantly to the success of the organization.
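Configuring a CloudWatch alarm, one of the daily tasks above, reduces to assembling a parameter set like the one below and passing it to boto3's `put_metric_alarm`. The thresholds and names are illustrative, and the live API call is shown only as a comment because it requires AWS credentials:

```python
def cpu_alarm_spec(instance_id, threshold=80.0):
    """Build the parameter dict for a CloudWatch CPU-utilisation alarm.

    In a real environment you would run:
        boto3.client("cloudwatch").put_metric_alarm(**spec)
    """
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,            # evaluate 5-minute windows
        "EvaluationPeriods": 2,   # two breaching windows before alarming
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }

spec = cpu_alarm_spec("i-0abc123")
```

Keeping the spec as data makes it easy to apply the same alarm across a fleet of instances, or to manage it through Terraform/CloudFormation instead.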

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a hands-on and strategic Solution/Technical Architect, you will be responsible for leading the design and implementation of AWS-based integration solutions for large-scale enterprise environments. Your focus will be on building scalable, secure, and high-performing architectures that connect ERP systems (JD Edwards), eCommerce platforms, PIM/DAM ecosystems, and external partners using AWS-native services, real-time APIs, and EDI workflows. You will collaborate closely with offshore and onshore teams to deliver modern integration solutions with architectural rigor and excellence.

Your key responsibilities will include designing and implementing integration architectures on AWS, including data pipelines, APIs, and event-driven workflows. You will lead real-time and batch integrations across ERP (JDE), eCommerce, PIM/DAM, and external partner systems, utilizing AWS-native services such as API Gateway, Lambda, EventBridge, Step Functions, SQS/SNS, and S3 to build modular, cloud-native solutions. Additionally, you will define interface specifications, architecture diagrams, deployment models, and technical documentation to ensure that technical solutions meet performance, scalability, security, and compliance standards. Supporting EDI-style integrations using custom services or third-party tools hosted on AWS will also be within your responsibilities, aligning with Boomi, MuleSoft, or Informatica-style EIB architecture patterns.

To be successful in this role, you must have at least 10 years of experience in solution architecture and enterprise integration, with a minimum of 3 years of strong hands-on experience with AWS. Deep expertise in AWS-native integration services such as Lambda, Step Functions, API Gateway, EventBridge, CloudWatch, S3, and IAM is essential. Proven experience integrating with JD Edwards (JDE) and commerce platforms, a solid understanding of event-driven architecture, serverless patterns, and API lifecycle management, and strong knowledge of message transformation (XML, JSON, XSLT), asynchronous messaging, and security enforcement are also required. Excellent communication skills and the ability to document and present technical solutions clearly are crucial for this role.

In return, we offer a lead architecture role on mission-critical AWS-based integration programs, exposure to complex enterprise modernization projects across retail and digital commerce, a clear career path toward Enterprise Architect or Platform Practice Lead roles, and an engineering-first culture focused on quality, innovation, and platform excellence.
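Message transformation of the kind this role lists (XML in, JSON out) can be sketched with the Python standard library; the order-document shape below is invented for illustration, not taken from any real partner format:

```python
import json
import xml.etree.ElementTree as ET

def xml_order_to_json(xml_text):
    """Map a partner's XML order message to the JSON shape a downstream
    API expects -- a typical integration-layer transformation step."""
    root = ET.fromstring(xml_text)
    return json.dumps({
        "orderId": root.findtext("Id"),
        "items": [
            {"sku": item.get("sku"), "qty": int(item.get("qty"))}
            for item in root.findall("Item")
        ],
    })

msg = xml_order_to_json("<Order><Id>42</Id><Item sku='A1' qty='3'/></Order>")
```

In an AWS-native design, this mapping would typically live in a Lambda function triggered by an EventBridge or SQS event carrying the inbound document.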

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

As a Node Developer at Starberry, a decentralised global digital media agency specializing in UX/UI, website design & development, and digital marketing, you will be responsible for building high-performance, robust, and scalable web components for our applications. We have a presence in London, UK, and a hub in Coimbatore, India, and we are seeking individuals with 3-7 years of experience in this field.

In this role, you will work both independently and collaboratively, demonstrating strong organizational skills and the ability to manage deadlines with minimal supervision. You will be part of a dynamic team whose tech stack includes NodeJS (Express JS), NoSQL (MongoDB), RDBMS (MySQL), React JS, Redux, GraphQL, CDN (S3), Webpack, AWS, Git, and Headless CMS (preferred: Strapi), among others. Familiarity with Gatsby, Netlify deployment, the JamStack methodology, DynamoDB, and PHP is considered a plus.

Your responsibilities will include contributing high-quality code to a fast-growing React - Node JS team, collaborating with developers, product managers, designers, and stakeholders to deliver robust software solutions in an agile environment, and providing custom NPM packages. You will also be expected to ensure code standards and best practices are implemented, participate in peer code reviews, and contribute to project planning and technical decision-making.

To excel in this role, you must have very strong knowledge of and extensive experience with Node JS, PHP, MySQL, single-page JavaScript applications with large codebases, NoSQL (MongoDB), GraphQL, and AWS services such as S3, Lambda, and DynamoDB. Proficiency in Git, GitHub, and version control workflows is essential, as is business-level competency in both written and spoken English due to the international nature of the team. Desired skills include knowledge of CI/CD pipelines and associated tooling, as well as familiarity with GitHub Actions.

If you are a proactive individual who thrives in a fast-paced environment and possesses the required technical expertise, we encourage you to apply for this exciting opportunity at Starberry.

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Junior Software Engineer at SmartBear, you will play a crucial role in the transformation of QMetry Test Management for Jira. Your responsibilities will involve solving intricate business problems and developing highly scalable applications that offer exceptional user experiences. Reporting to the Lead Engineer, you will design, document, and implement new systems in Java 17/21.

Working closely with the engineering team, you will develop backend services and REST APIs using Java, Spring Boot, and JSON. Your role will require you to write code according to product requirements, create new products, conduct automated tests, and contribute to system testing within an agile development environment. Effective communication with both business and technical stakeholders will be essential to delivering high-quality products that meet business expectations.

You will need hands-on experience with Java 17 or higher, and a Bachelor's degree in Computer Science, Computer Engineering, or a related field. The ideal candidate has 2-4 years of relevant experience, proficiency in API-driven development, and a strong understanding of OOP, Java, the Spring Framework, and JPA. Experience with relational databases such as MySQL, PostgreSQL, MSSQL, and Oracle, along with familiarity with AWS services, Docker, GitHub, and Agile methodologies, is highly desirable. Prior exposure to the Atlassian suite of products and a SCRUM environment will be advantageous.

Joining the SmartBear crew offers you the opportunity to grow your career, work in a supportive and inclusive environment, and have a positive impact on the tech-driven world. SmartBear values ethical corporate practices, social responsibility, and diversity within its teams. Headquartered in Somerville, MA, SmartBear has a global presence and has been recognized with various industry awards for its innovative products and company culture.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a skilled developer with 5-8 years of experience, you will be responsible for developing, updating, and maintaining applications that meet specified requirements, scale efficiently, and deliver high performance. Your role will involve analyzing project requirements, designing effective solutions within the broader product architecture, and deploying APIs and web services with reusable, testable, and efficient code. You will implement low-latency, scalable applications with optimized performance, create Dockerfiles for containerization, and deploy applications within a Kubernetes environment. The ability to adapt quickly to a dynamic, start-up style environment, strong problem-solving skills, and a resourceful approach will be key to driving results.

Your qualifications should include proficiency in Python, particularly with FastAPI/Flask, along with familiarity with other web frameworks like Django and web2py. A deep understanding of RESTful API design, HTTP, and JSON; database expertise in RDBMS and document-based databases; knowledge of design patterns and best practices; containerization, orchestration, and scalable architecture; and unit testing and quality assurance are all essential. You should also be proficient with Git for source code management and collaborative development.

In addition to technical skills, hands-on experience in ETL processes, data pipelines, cloud services (especially AWS), microservices architecture, and CI/CD tools will be valuable. Working on technical challenges with global impact, self-development opportunities, sponsored certifications, tech talks, hackathons, and a generous benefits package including health insurance, retirement benefits, and flexible work hours are some of the reasons you will love working with us. This role offers an exciting opportunity to contribute to cutting-edge solutions and advance your career in a dynamic and collaborative environment.

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be joining a global technology consulting and digital solutions company that empowers enterprises to innovate and enhance business models through digital technologies. With a team of over 84,000 professionals in more than 30 countries, the company serves 700+ clients by leveraging its domain and technology expertise to drive competitive differentiation, customer experiences, and business outcomes.

As a .NET Backend + AWS professional, you will be based in Hyderabad (Rai Durg) or Gurgaon (Ambience Island, DLF Phase 3). The ideal candidate has 6 to 8 years of experience and will be hired on a contract-to-hire basis with a hybrid work mode. Immediate joiners are preferred for this role.

Your core skills should include proficiency in the C# programming language, experience in developing and integrating RESTful APIs using ASP.NET Web API, and a basic understanding of AWS and GenAI technologies.

On the secondary side, familiarity with AWS services such as AWS Lambda, API Gateway, S3, DynamoDB, RDS, SQS, and SNS, as well as GenAI technologies like prompt engineering, consuming GenAI APIs, and understanding LLM use cases in enterprise apps, would be beneficial. Experience with DevOps tools such as GitHub for version control, GitHub Actions for CI/CD, Docker (optional but valuable), and logging and monitoring tools like CloudWatch will also be advantageous.

Posted 1 month ago

Apply

10.0 - 14.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Lead Full-Stack Developer with a minimum of 10 years of experience in React, Node.js, and Angular, you will be responsible for working on projects within the healthcare/pharma industry. Your primary duties will include:

- Engaging in frontend development utilizing React (v17) and Angular
- Participating in backend development using Node.js and Express.js
- Collaborating with MySQL and Redis for data storage and caching
- Understanding and implementing RESTful APIs
- Gaining exposure to AWS services such as EC2, S3, and Lambda
- Acquiring knowledge in AI/ML fundamentals, particularly NLP for resume/job matching
- Familiarizing yourself with microservices architecture, CI/CD pipelines, Docker, and Kubernetes as additional learning opportunities

The ideal candidate for this role should possess:

- Basic proficiency or practical experience with React, Angular, and Node.js
- Demonstrated curiosity and enthusiasm for full-stack development and cloud computing
- A solid grasp of JavaScript, HTML, CSS, and fundamental API usage
- A strong willingness to learn rapidly, show initiative, and collaborate effectively in a team environment

If you meet these qualifications and are excited about this opportunity, please submit your profile to sruthi.dharmavaram@yardglobalsolutions.com.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As the Technical Marketing Engineer (TME) at our company, you will play a crucial role as the technical expert for our products. Your responsibilities will include creating engaging demos, developing technical content, supporting field teams, and validating use cases across various partner solutions and customer environments. Collaboration with Product Management, Product Marketing, Engineering, and Sales Engineering will be essential to ensure our solutions are not only technically competitive but also well understood.

Your key responsibilities will include:

- Designing and managing lab and cloud-based demo environments for Quantum Myriad and ecosystem solutions like Veeam, Commvault, AI/ML pipelines, and editing workflows.
- Creating repeatable customer-facing demos to assist field teams, events, webinars, and partner engagements.
- Authoring technical white papers, solution briefs, competitive positioning guides, FAQs, and reference architectures.
- Collaborating with Product Marketing to develop launch content, videos, and hands-on tutorials.
- Testing and documenting joint solutions with key ISVs in areas such as backup, MAM, HPC, analytics, and AI/ML.
- Building and validating technical integrations with third-party tools and platforms.
- Providing training and technical guidance to field SEs, partners, and support teams.
- Assisting in competitive and customer-facing presentations while serving as a technical expert.
- Acting as a technical bridge between the field and engineering to provide insights into customer needs, performance tuning, and feature enhancement opportunities.
- Capturing insights from the field to influence product enhancements and roadmap direction.
- Collaborating with engineering and product teams to prioritize features and address technical challenges.

The ideal candidate will possess:

- 5+ years of experience in technical marketing, solutions engineering, or technical presales roles in the storage or data management industry.
- Strong knowledge of enterprise file and object storage systems, protocols (NFS, SMB, S3), and performance tuning.
- Hands-on experience in system configuration in lab environments or cloud-based POCs.
- Understanding of ecosystem technologies like data protection (Veeam, Commvault), media workflows (Adobe, Avid, Resolve), and analytics/AI frameworks.
- Excellent communication skills with the ability to create clear, audience-appropriate content and demos.

Preferred qualifications include experience with Quantum products or similar platforms, familiarity with high-throughput, low-latency storage use cases, knowledge of scripting or automation (Bash, Python, Terraform), and an understanding of hybrid and multi-cloud architectures, Kubernetes, and software-defined storage.

About Quantum: Quantum is a leading provider of end-to-end data management solutions, enabling customers to capture, store, protect, and archive their valuable data throughout its lifecycle. Our flagship offering, Quantum Myriad, is an all-flash, scale-out NAS optimized for high-performance workloads and is trusted by media, enterprise, HPC, and analytics organizations globally.

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

The Lead - Database Administrator position at Fidelity Brokerages involves managing Db2 on the z/OS platform, Aurora PostgreSQL, and Aerospike on AWS. In this role, you will:

- Provide application database and replication support for a large Db2 z/OS sysplex environment.
- Provide application database and data migration support for a cloud-based database infrastructure.
- Handle SQL tuning and performance on Db2 and PostgreSQL.
- Maintain databases, manage utilities, and provide 12x7 on-call support.
- Contribute to database engineering tasks.

Key skills required for this role include extensive experience in Db2 z/OS database administration in a large, complex database environment; proficiency in SQL, JCL, and COBOL; and familiarity with Qrep and IIDR replication products. Knowledge of DB2 LUW on Linux/UNIX, experience with CA Platinum tools, and expertise in distributed technologies and AWS services such as Aurora PostgreSQL are also essential.

Your work as a Lead - Database Administrator significantly impacts the organization by ensuring the smooth functioning of databases and replication processes. The ideal candidate should hold a BE/MCA degree with 2-5 years of industry experience. The location for this position is Bangalore - EGL/Manyata, with shift timings from 8:30 am to 5:00 pm. Certifications in Information Technology will be an added advantage for this role.
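The SQL tuning responsibility above can be sketched in miniature: inspect the query plan, add an index on the filtered column, and confirm the plan changes from a table scan to an index search. The example below uses SQLite from the Python standard library purely as a stand-in (Db2 and PostgreSQL expose the same idea through their own EXPLAIN facilities), and the table is hypothetical.

```python
# Index-driven SQL tuning sketch using SQLite as a stand-in for Db2/PostgreSQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, account TEXT, amount REAL)")
conn.executemany("INSERT INTO trades (account, amount) VALUES (?, ?)",
                 [(f"A{i % 100}", float(i)) for i in range(1000)])

def plan(sql: str) -> str:
    """Return the query plan as a single string (detail column of each row)."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM trades WHERE account = 'A7'")   # full table scan
conn.execute("CREATE INDEX idx_account ON trades(account)")
after = plan("SELECT * FROM trades WHERE account = 'A7'")    # index search
```

On the engines named in the role, the equivalent step is reading `EXPLAIN` (PostgreSQL) or `EXPLAIN PLAN` (Db2) output and checking buffer/getpage statistics, but the scan-versus-index diagnosis is the same.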

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

maharashtra

On-site

The opportunity available at EY is for the position of Senior Consultant - Application Engineer. As a Senior Consultant, your main responsibilities will revolve around providing senior-level system analysis, design, development, and implementation of applications and databases, as well as integrating third-party products. You will translate technical specifications into code for complex new or enhancement projects for internal clients; write programs, develop code, and test artifacts; and produce reports. It is essential to employ software development techniques that ensure tests are implemented in a way that supports automation.

In this role, you will lead backend system optimization efforts to ensure high performance, scalability, and responsiveness under varying workloads. You will also spearhead the integration of multiple data sources, APIs, and services to streamline data flow and improve overall system efficiency, and oversee the creation and maintenance of detailed backend documentation to help cross-functional teams understand architecture and logic. Advocating and implementing best practices for security, data protection, and backend resilience, aligned with enterprise-grade standards, is crucial. Collaboration with front-end teams to design backend interfaces that support seamless and intuitive user experiences is also a key aspect of the role.

You will be expected to promote code into the development, test, and production environments on schedule, provide follow-up production support, submit change control requests, and document accordingly. A thorough understanding of software development methodology and development architecture standards is important, as is training and mentoring less experienced staff and resolving escalated issues as they arise.

Participating in design, code, and test inspections throughout the life cycle to identify issues, explaining technical considerations at related meetings (including those with internal clients), and performing systems analysis activities are all part of the responsibilities. A broad understanding of Vanguard's technologies, tools, and applications, including those that interface with business areas and systems, is crucial. Additionally, you will interface with cross-functional team members, communicate system issues at the appropriate technical level for each audience, and ensure compliance with Information Technology and Information Security policies and procedures. You may be required to participate in special projects and perform other duties as assigned.

To qualify for this role, you should have a minimum of 5 years of experience in software development, system integration, database design, and back-end architecture, along with a Bachelor's degree in a relevant field such as Computer Science, or a Master's degree/diploma in Computer Science. Strong communication, facilitation, relationship-building, presentation, and negotiation skills are essential, along with the ability to work with cross-functional teams. In addition, expertise in server-side development, system integration, and API management, as well as knowledge of database design, optimization, debugging, and performance tuning, is required.

Must-have skills for this role include Java, Spring Boot, Angular, Node.js, AWS, CloudFormation, ECS, Fargate, DynamoDB, S3, Lambda, IAM, SQL queries, GitHub, Bitbucket, Bamboo, CI/CD, TDD/BDD, Splunk, Honeycomb, and Python. Knowledge of Agile, Confluence, and JIRA, plus an aptitude for continuous improvement of process flows/workflows, is also beneficial. An AWS Cloud Practitioner certification is preferred, and familiarity with Adobe Experience Manager and an AWS Cloud background would be advantageous.

EY is committed to being an inclusive employer and offers flexible working arrangements to achieve the right balance for its people. The organization believes in providing opportunities for career growth while accommodating personal priorities. As a global leader in assurance, tax, transaction, and advisory services, EY fosters a culture that encourages training, opportunities, and creative freedom to make a positive impact in the world.

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

pune, maharashtra

On-site

Join us as a Data Engineer responsible for supporting the successful delivery of Location Strategy projects to plan, budget, and agreed quality and governance standards. You'll spearhead the evolution of our digital landscape, driving innovation and excellence, and harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences.

To be successful as a Data Engineer, you should have:

- Hands-on experience in PySpark and strong knowledge of DataFrames, RDDs, and SparkSQL.
- Experience in developing, testing, and maintaining applications on AWS Cloud.
- A strong command of the AWS data analytics technology stack (Glue, S3, Lambda, Lake Formation, Athena).
- Experience designing and implementing scalable, efficient data transformation/storage solutions using Snowflake, including data ingestion to Snowflake for storage formats such as Parquet, Iceberg, JSON, and CSV.
- Experience using DBT (Data Build Tool) with Snowflake for ELT pipeline development, and writing advanced SQL and PL/SQL programs.
- Hands-on experience building reusable components using Snowflake and AWS tools/technology.

Exposure to data governance or lineage tools such as Immuta and Alation, and experience with orchestration tools such as Apache Airflow or Snowflake Tasks, are considered advantageous. Knowledge of the Ab Initio ETL tool is a plus. Other highly valued skills include the ability to engage with stakeholders, elicit requirements/user stories, and translate requirements into ETL components, as well as an understanding of infrastructure setup and the ability to provide solutions individually or with teams.

Good knowledge of Data Marts and Data Warehousing concepts, strong analytical and interpersonal skills, and experience implementing a cloud-based enterprise data warehouse across multiple data platforms (including Snowflake and NoSQL environments) to build a data movement strategy are also valued. The role is based out of Chennai.

Purpose of the role: To build and maintain the systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure.

Accountabilities:

- Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Analyst expectations: Perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement. The role requires in-depth technical knowledge and experience in the assigned area of expertise. People Leaders lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources; they are expected to demonstrate the four LEAD behaviours: Listen and be authentic, Energize and inspire, Align across the enterprise, Develop others. Individual contributors develop technical expertise in the work area, acting as advisors where appropriate, and will have an impact on the work of related teams within the area.

You will partner with other functions and business areas, take responsibility for the end results of a team's operational processing and activities, escalate breaches of policies/procedures appropriately, and take responsibility for embedding new policies/procedures adopted for risk mitigation. You will advise and influence decision-making within your area of expertise, take ownership of managing risk and strengthening controls in relation to the work you own or contribute to, and deliver your work in line with relevant rules, regulations, and codes of conduct. You will maintain and continually build an understanding of how your sub-function integrates with the function, alongside knowledge of the organization's products, services, and processes, and demonstrate understanding of how areas coordinate and contribute to the objectives of the organization's sub-function. You will resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents; guide and persuade team members and communicate complex/sensitive information; and act as a contact point for stakeholders outside the immediate function, building a network of contacts outside the team and external to the organization.

All colleagues are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship (our moral compass, helping us do what we believe is right) and the Barclays Mindset to Empower, Challenge, and Drive (the operating manual for how we behave).
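The transformation work this role describes (parse, apply data-quality rules, deduplicate, cast types) can be sketched in miniature. In practice this would be a PySpark DataFrame job or a Snowflake/DBT model; the pure-Python stand-in below uses an invented schema to show the shape of the logic only.

```python
# Pure-Python sketch of a typical ELT transform step (hypothetical schema).
import csv
import io

RAW = """trade_id,amount,currency
1,100.5,USD
2,,USD
1,100.5,USD
3,250.0,EUR
"""

def transform(raw: str) -> list[dict]:
    """Parse CSV, drop rows missing an amount, cast types, dedupe on trade_id."""
    seen, out = set(), []
    for row in csv.DictReader(io.StringIO(raw)):
        if not row["amount"]:
            continue                      # data-quality rule: amount is required
        if row["trade_id"] in seen:
            continue                      # dedupe on the business key
        seen.add(row["trade_id"])
        out.append({"trade_id": int(row["trade_id"]),
                    "amount": float(row["amount"]),
                    "currency": row["currency"]})
    return out

rows = transform(RAW)
```

The equivalent PySpark expression would be a `filter`/`dropDuplicates`/`cast` chain over a DataFrame; the business rules, not the framework, are the substance of the step.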

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

ahmedabad, gujarat

On-site

You have a minimum of 8 years of experience as a full-stack developer, with a particular focus on large web applications. Your responsibilities will include designing and developing RESTful services, APIs, and back-end servers using Node.js. A strong command of OOP concepts and hands-on experience with a range of NoSQL and SQL databases are essential.

In this role you will need solid experience with Angular, as well as hands-on experience with JavaScript (including various JS standards and object-oriented JavaScript), TypeScript, and jQuery. Knowledge of web technologies and UI/UX standards is crucial to ensure ESLint error-free code. Experience with unit-test frameworks such as Jest, Jasmine, Karma, and Mocha/Chai, along with an understanding of the TDD and BDD approaches used in test frameworks, is necessary.

Familiarity with cloud platforms such as AWS, Microsoft Azure, and GCP is required, including basic AWS services like EC2, Lambda, Route 53, CloudFront, API Gateway, and S3 (or the equivalent Azure and GCP services). You should be familiar with modern application deployment practices, including continuous integration and deployment and configuration management; tools like Jenkins, Docker, and Kubernetes should not be new to you. An understanding of web servers such as Tomcat and Nginx will be advantageous.

Your role will involve designing and developing data transmission/connectivity between multiple components using protocols such as WebSockets or MQTT channels. Proficiency with code versioning tools like Git or TFS and version-control repositories such as Bitbucket, GitHub, and GitLab is necessary. Familiarity with project management tools like Atlassian JIRA, and experience in selecting technologies, setting technical direction, estimating/planning, and directing developers, are key aspects of this position.

Working with cross-functional teams to define project requirements and collaborating with Project Managers, Scrum Masters, and Product Managers to define sprint capacity will be part of your responsibilities. Your duties will also include production system support and acting as the Release Owner for the Staging and Production environments.
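Of the transport protocols named above, MQTT has a small, well-defined topic-matching rule that is easy to sketch: `+` matches exactly one topic level and `#` matches all remaining levels. A minimal illustration of that rule (topic names here are invented):

```python
# MQTT topic-filter matching sketch: '+' = one level, '#' = the remainder.
def topic_matches(filter_: str, topic: str) -> bool:
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True                      # '#' swallows the rest of the topic
        if i >= len(t_parts):
            return False                     # topic ran out of levels
        if f != "+" and f != t_parts[i]:
            return False                     # literal level mismatch
    return len(f_parts) == len(t_parts)      # no wildcard: lengths must agree
```

Real brokers enforce extra rules from the MQTT specification (for example, `#` must be the last level of a filter), which this sketch omits.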

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

You should have experience in core AWS and GCP services along with strong technical skills. Your responsibilities will include troubleshooting problems related to web applications and databases across multiple systems. A strong background in networking and operating systems is essential.

Key requirements:

- Must have experience with Kubernetes, Docker, and Linux.
- Understand the current application infrastructure.
- Experience with core AWS services such as EC2, VPC, S3, Kubernetes, load balancers, CloudWatch, CodeDeploy, and AWS Inspector, plus the similar technologies in GCP.
- Proficiency in setting up administrator and service accounts, and a thorough understanding of monitoring AWS instances and services.
- Hands-on experience working with AWS and GCP through the Command Line Interface (CLI) and management console.
- Knowledge and experience in architecting and configuring Virtual Private Clouds (VPCs).
- Experience with Terraform and Kubernetes.
- Proficiency in monitoring and auditing systems.
- Understanding of networking concepts including DNS, TCP/IP, and firewalls.
- Ability to manage highly private AWS and GCP clouds while ensuring security.
- Ability to build and manage automated infrastructure using open-source tools.
- Experience with Windows/Linux operating systems and storage technologies.
- Hands-on experience with Microsoft Active Directory, DNS, and Group Policies.
- Strong troubleshooting and analytical skills related to web applications and databases.
- Ability to estimate usage costs and implement operational cost-control mechanisms.
- Previous work experience in small or medium production cloud environments.
- Relevant experience as a cloud engineer.

This job requires a deep understanding of AWS and GCP services, strong problem-solving skills, and the ability to work effectively in fast-paced environments.
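The cost-estimation skill listed above is, at its core, arithmetic over usage assumptions. A back-of-envelope sketch follows; every price here is a placeholder assumption, not a current AWS or GCP rate, so real estimates must come from the providers' pricing pages or calculators.

```python
# Back-of-envelope monthly cloud cost estimate. Prices are placeholders only.
ASSUMED_PRICES = {
    "ec2_hour": 0.0416,     # hypothetical on-demand rate per instance-hour
    "s3_gb_month": 0.023,   # hypothetical storage rate per GB-month
}

def monthly_estimate(instances: int, hours_per_day: float, s3_gb: float) -> float:
    """Compute + storage estimate over a 30-day month, rounded to cents."""
    compute = instances * hours_per_day * 30 * ASSUMED_PRICES["ec2_hour"]
    storage = s3_gb * ASSUMED_PRICES["s3_gb_month"]
    return round(compute + storage, 2)

cost = monthly_estimate(instances=2, hours_per_day=24, s3_gb=500)
```

Operational cost control then means comparing such estimates against actual billing data and alerting on the gap, e.g. via budget alarms.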

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

Genpact is a global professional services and solutions firm committed to delivering outcomes that shape the future. With a team of 125,000+ people across 30+ countries, we are driven by innate curiosity, entrepreneurial agility, and the desire to create lasting value for our clients. Fuelled by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises worldwide, including the Fortune Global 500. Our expertise lies in deep business and industry knowledge, digital operations services, and proficiency in data, technology, and AI.

We are currently seeking applications for the role of Senior Principal Consultant - QA Engineer.

Responsibilities:

- Design and develop comprehensive test plans, test cases, and test scenarios based on functional and non-functional requirements.
- Manage the test case life cycle.
- Execute and analyze manual and automated tests to identify defects and ensure the quality of software applications.
- Collaborate closely with development teams to ensure alignment of test cases with development goals and timelines.
- Work with cross-functional teams to ensure adequate testing coverage and effective communication of test results.

Minimum qualifications:

- Strong technical knowledge of SQL and ETL testing, with proficiency in writing test scripts in Python for validating functionality, creating automation frameworks, and ensuring the performance and reliability of data systems.
- Deep understanding of the data domain, encompassing data processing, data storage, and data retrieval.
- Strong collaboration, communication, and analytical skills.
- Experience in reviewing system requirements and tracking quality assurance metrics such as defect densities and open defect counts.
- Proficiency in creating and enhancing CI/CD pipeline integrations.
- Experience with Agile/Scrum development processes.
- Some exposure to performance and security testing.
- Hands-on experience in test execution using AWS services, with technical proficiency in services like MSK, EKS, Redshift, and S3.

Job details:

- Job Title: Senior Principal Consultant
- Primary Location: India-Gurugram
- Schedule: Full-time
- Education Level: Bachelor's / Graduation / Equivalent
- Job Posting: Sep 18, 2024, 4:28:53 AM
- Unposting Date: Oct 18, 2024, 1:29:00 PM
- Master Skills List: Digital
- Job Category: Full Time
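The Python-based ETL validation work described above often boils down to reconciling a source extract against its load target. A small sketch of that pattern (tables, keys, and rules here are hypothetical; real checks would query the warehouse):

```python
# Source-vs-target ETL reconciliation sketch (hypothetical data).
def reconcile(source_rows: list[dict], target_rows: list[dict], key: str) -> dict:
    """Compare row counts and business-key sets between source and target."""
    src_keys = {r[key] for r in source_rows}
    tgt_keys = {r[key] for r in target_rows}
    return {
        "count_match": len(source_rows) == len(target_rows),
        "missing_in_target": sorted(src_keys - tgt_keys),
        "unexpected_in_target": sorted(tgt_keys - src_keys),
    }

source = [{"id": 1}, {"id": 2}, {"id": 3}]
target = [{"id": 1}, {"id": 3}]
report = reconcile(source, target, key="id")
```

Wrapped in pytest assertions and scheduled in a CI/CD pipeline, checks like this become the automated regression gate the role calls for.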

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

hyderabad, telangana

On-site

At Lilly, the global healthcare leader headquartered in Indianapolis, Indiana, we are dedicated to uniting caring with discovery to make life better for people around the world. Our 35,000 employees are committed to discovering and delivering life-changing medicines, improving disease management, and contributing to our communities through philanthropy and volunteerism. At Lilly, we prioritize putting people first and giving our best effort to our work.

The LCCI Tech Team is currently seeking a motivated leader for the Enterprise Data Organization. This individual will play a crucial role in optimizing data extraction, enhancing data infrastructure, and aligning operations with organizational objectives, and will be responsible for driving operational excellence, ensuring compliance, and fostering collaboration across teams and stakeholders. As a senior leader, this individual will serve as a strategic business partner in advancing data and analytics services to provide business-driven insights and maintain a competitive edge.

The Enterprise Data organization has developed an integrated data and analytics platform that enables Lilly team members to efficiently ingest, transform, consume, and analyze data sets across various environments. Contributors can easily prepare, analyze, and publish new data sets for the benefit of others within the organization.

Key responsibilities for this role include designing, building, testing, and deploying high-performance, scalable data pipelines and consumption solutions in AWS. This involves ensuring optimal storage, retrieval, and processing of data, as well as maintaining data integrity, security, and privacy. The successful candidate will also develop reusable components to accelerate enterprise data delivery, conduct comprehensive system testing, and actively participate in SAFe Agile framework ceremonies.

Basic qualifications for this position include a Bachelor's degree in computer science, information technology, management information systems, or a related field, along with 6 to 10 years of development experience in tools such as SQL, Python, and AWS services. Candidates should also have exposure to Agile development, code deployment using GitHub and CI/CD pipelines, and experience in data design, modeling, and management.

In addition to technical expertise, a strong commitment to good software design principles, an understanding of cloud and hybrid data architecture concepts, excellent written and oral communication skills, and a willingness to learn are essential attributes. If you are passionate about leveraging technology to drive innovation in the healthcare industry and are committed to making a meaningful impact, we invite you to join us at Lilly. Together, we can continue doing extraordinary things and making a difference in the lives of people worldwide.

Lilly is an equal opportunity employer dedicated to creating an inclusive and diverse workforce. We are committed to providing individuals with disabilities equal opportunities for employment. If you require accommodation during the application process, please complete the accommodation request form on our website for further assistance.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

noida, uttar pradesh

On-site

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow, informed and validated by science and data, superpowered by creativity and design, and underpinned by technology created with purpose.

Your role:

- IT experience with a minimum of 5+ years in creating data warehouses, data lakes, ETL/ELT, and data pipelines on cloud.
- Experience in data pipeline implementation with cloud providers such as AWS, Azure, and GCP, preferably in the Life Sciences domain.
- Experience with cloud storage, cloud databases, and cloud data warehousing and data lake solutions such as Snowflake, BigQuery, AWS Redshift, ADLS, and S3.
- Familiarity with cloud data integration services for structured, semi-structured, and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, and Dataproc.
- Good knowledge of infrastructure capacity sizing and costing of cloud services, to drive optimized solution architecture and balance infrastructure investment against performance and scaling.

Your profile:

- Ability to contribute to architectural choices using various cloud services and solution methodologies.
- Expertise in programming using Python.
- Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud.
- Understanding of networking, security, design principles, and best practices in the cloud.
- Knowledge of IoT and real-time streaming would be an added advantage.
- Ability to lead architectural/technical discussions with clients, with excellent communication and presentation skills.

At Capgemini, we recognize the significance of flexible work arrangements. Whether it's remote work or flexible hours, you will have an environment that supports a healthy work-life balance. Our mission is centered on your career growth, offering an array of career growth programs and diverse professions crafted to help you explore a world of opportunities, along with the chance to equip yourself with valuable certifications in the latest technologies such as Generative AI.

Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. With a responsible and diverse group of over 340,000 team members in more than 50 countries, Capgemini has a strong heritage of over 55 years. Clients trust Capgemini to unlock the value of technology to address the entire breadth of their business needs, delivering end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by market-leading capabilities in AI, Generative AI, cloud, and data, combined with deep industry expertise and a partner ecosystem.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

noida, uttar pradesh

On-site

As a hands-on and strategic Solution/Technical Architect, you will lead the design and implementation of AWS-based integration solutions for large-scale enterprise environments. Your main focus will be on building scalable, secure, and high-performing architectures to connect ERP systems (JD Edwards), eCommerce platforms, PIM/DAM ecosystems, and external partners using AWS-native services, real-time APIs, and EDI workflows. You will collaborate closely with offshore and onshore teams to deliver modern integration solutions with architectural rigor and excellence.

Your key responsibilities will include:

- Designing and implementing integration architectures on AWS, including data pipelines, APIs, and event-driven workflows.
- Leading real-time and batch integrations across ERP (JDE), eCommerce, PIM/DAM, and external partner systems, using AWS-native services such as API Gateway, Lambda, EventBridge, Step Functions, SQS/SNS, and S3 to build modular, cloud-native solutions.
- Defining interface specifications, architecture diagrams, deployment models, and technical documentation, ensuring technical solutions meet performance, scalability, security, and compliance standards.
- Supporting EDI-style integrations using custom services or third-party tools hosted on AWS, aligned with established architecture patterns.

The ideal candidate should have at least 10 years of experience in solution architecture and enterprise integration, with a minimum of 3 years of hands-on experience on AWS. Deep expertise with AWS-native integration services (Lambda, Step Functions, API Gateway, EventBridge, CloudWatch, S3, and IAM) is required, along with proven experience integrating with JD Edwards (JDE) and commerce platforms and a solid understanding of event-driven architecture, serverless patterns, and API lifecycle management. Strong knowledge of message transformation (XML, JSON, XSLT), asynchronous messaging, and security enforcement is also necessary, alongside excellent communication skills and the ability to document and present technical solutions clearly.

In return, we offer a lead architecture role on mission-critical AWS-based integration programs, exposure to complex enterprise modernization projects across retail and digital commerce, a clear career path towards Enterprise Architect or Platform Practice Lead roles, and an engineering-first culture that prioritizes quality, innovation, and platform excellence.
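The message-transformation skill named above typically means converting between the JSON a modern API emits and the XML an EDI-style partner expects. A minimal sketch using only the standard library (field names are invented; a real integration would follow the partner's agreed schema, often via XSLT or a mapping tool):

```python
# JSON-to-XML message transformation sketch (hypothetical field names).
import json
import xml.etree.ElementTree as ET

def json_to_xml(payload: str, root_tag: str = "order") -> str:
    """Flatten a one-level JSON object into an XML element per field."""
    data = json.loads(payload)
    root = ET.Element(root_tag)
    for field, value in data.items():
        child = ET.SubElement(root, field)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_msg = json_to_xml('{"id": 42, "sku": "AB-1", "qty": 3}')
```

In an event-driven AWS design, a transform like this would typically run inside a Lambda function between the API-facing queue and the partner-facing channel.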

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
