
3652 Redshift Jobs - Page 40

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

8.0 years

30 - 38 Lacs

Gurgaon

Remote

Role: AWS Data Engineer Location: Gurugram Mode: Hybrid Type: Permanent Job Description: We are seeking a talented and motivated Data Engineer with requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment. Key Responsibilities: Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes. Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others. Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis. Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows. Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages. Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly. Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met. Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability. Qualifications: Essential Skills: Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets. AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2. ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation. Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java). Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms. Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems. Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines. Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline. Desirable Skills: Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies. Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies. Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements. 
Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka. Business Intelligence Tools: Experience with BI tools (Tableau, Quicksight) for visualization and reporting. Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.) Job Type: Permanent Pay: ₹3,000,000.00 - ₹3,800,000.00 per year Benefits: Work from home Schedule: Day shift Monday to Friday Experience: Python: 3 years (Required) Data Engineering: 6 years (Required) Batch Technologies Hadoop, Hive, Athena, Presto, Spark: 4 years (Required) SQL / Queries: 3 years (Required) AWS Elastic MapReduce (EMR): 4 years (Required) ETL/ data pipeline implementations: 3 years (Required) AWS CDK, Cloud-formation, Lambda, Step-function: 4 years (Required) Athena: 3 years (Required) AWS Glue Catalog: 2 years (Required) CI/CD: GitHub Actions: 2 years (Required) Work Location: In person
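For readers unfamiliar with the S3-to-Redshift loading pattern this role centres on, here is a minimal, illustrative sketch; the cluster endpoint, table, bucket, and IAM role are placeholders, not details from the posting.

```python
# Hypothetical illustration only: load a Parquet extract from S3 into Redshift
# with a COPY command. Connection details, table, bucket and IAM role are placeholders.
import psycopg2

COPY_SQL = """
COPY analytics.orders
FROM 's3://example-data-lake/curated/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
FORMAT AS PARQUET;
"""

def load_orders() -> None:
    conn = psycopg2.connect(
        host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
        port=5439,
        dbname="dev",
        user="etl_user",
        password="***",  # in practice, fetch from AWS Secrets Manager
    )
    try:
        with conn, conn.cursor() as cur:
            cur.execute(COPY_SQL)                      # bulk load from S3 into the target table
            cur.execute("ANALYZE analytics.orders;")   # refresh planner statistics after the load
    finally:
        conn.close()

if __name__ == "__main__":
    load_orders()
```

In a production pipeline of the kind the posting describes, a load like this would typically be triggered by Lambda or Glue and the credentials pulled from Secrets Manager rather than hard-coded.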

Posted 3 weeks ago

Apply

20.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description Over the past 20 years Amazon has earned the trust of over 300 million customers worldwide by providing unprecedented convenience, selection and value on Amazon.com. By deploying Amazon Pay’s products and services, merchants make it easy for these millions of customers to safely purchase from their third-party sites using the information already stored in their Amazon account. In this role, you will lead Data Engineering efforts to drive automation for the Amazon Pay organization. You will be part of the data engineering team that will envision, build and deliver high-performance, fault-tolerant data pipelines. As a Data Engineer, you will be working with cross-functional partners from Science, Product, SDEs, Operations and leadership to translate raw data into actionable insights for stakeholders, empowering them to make data-driven decisions. Key job responsibilities Design, implement, and support a platform providing ad-hoc access to large data sets Interface with other technology teams to extract, transform, and load data from a wide variety of data sources Implement data structures using best practices in data modeling, ETL/ELT processes, and SQL, Redshift, and OLAP technologies Model data and metadata for ad-hoc and pre-built reporting Interface with business customers, gathering requirements and delivering complete reporting solutions Build robust and scalable data integration (ETL) pipelines using SQL, Python and Spark. Build and deliver high-quality data sets to support business analysts, data scientists, and customer reporting needs. Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers Basic Qualifications 3+ years of data engineering experience Experience with data modeling, warehousing and building ETL pipelines Experience with SQL Preferred Qualifications Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Karnataka Job ID: A3011919
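As a rough illustration of the "pre-built reporting" pipelines this posting mentions, the sketch below aggregates order events with PySpark and writes a Parquet data set; all paths and column names are assumptions, not Amazon Pay specifics.

```python
# A minimal sketch, under assumed table and bucket names, of a SQL/Python/Spark
# reporting pipeline: aggregate order events into a daily pre-built reporting
# data set and write it out as Parquet for downstream consumers.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-payments-report").getOrCreate()

orders = spark.read.parquet("s3://example-datalake/payments/orders/")  # placeholder path

daily_report = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "merchant_id")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("gross_amount"),
    )
)

# Land the aggregate where BI tools or a Redshift COPY can pick it up
(daily_report.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-reporting/daily_merchant_summary/"))
```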

Posted 3 weeks ago

Apply

6.0 years

6 - 9 Lacs

Gurgaon

On-site

DESCRIPTION Want to build at the cutting edge of immersive shopping experiences? The Visual Innovation Team (VIT) is at the center of all advanced visual and immersive content at Amazon. We're pioneering VR and AR shopping, CGI, and GenAI. We are looking for a Design Technologist who will help drive innovation in this space, who understands the technical problems the team may face from an artistic perspective, and who provides creative technical solutions. This role is for you if you want to be a part of: Partnering with world-class creatives and scientists to drive innovation in content creation Developing and expanding Amazon’s VIT Virtual Production workflow Building one of the largest content libraries on the planet Driving the success and adoption of emerging experiences across Amazon Key job responsibilities We are looking for a Design Technologist with a specialty in workflow automation using novel technologies like Gen-AI and CV. You will prototype and deliver creative solutions to the technical problems related to Amazon visuals. The right person will bring an implicit understanding of the balance needed between design, technology, and creative professionals, helping scale video content creation within Amazon by enabling our teams to work smarter, not harder. Design Technologists in this role will: Act as a bridge between creative and engineering disciplines to solve multi-disciplinary problems Work directly with videographers and studio production to develop semi-automated production workflows Collaborate with other tech artists and engineers to build and maintain a centralized suite of creative workflows and tooling Work with creative leadership to research, prototype and implement the latest industry trends that expand our production capabilities and improve efficiency A day in the life As a Design Technologist, a typical day will include, but is not limited to, coding and development of tools, workflows, and automation to improve the creative crew's experience and increase productivity. This position will be focused on in-house video creation, with virtual production and Gen-AI workflows. You'll collaborate with production teams, observing, empathizing, and prototyping novel solutions. The ideal candidate is observant, creative, curious, and empathetic, understanding that problems often have multiple approaches. BASIC QUALIFICATIONS 6+ years of front-end technologist, engineer, or UX prototyper experience Have coding samples in front-end programming languages Have an available online portfolio Experience developing visually polished, engaging, and highly fluid UX prototypes Experience collaborating with UX, Product, and technical partners PREFERRED QUALIFICATIONS Knowledge of databases and AWS database services: Elasticsearch, Redshift, DynamoDB Experience with machine learning (ML) tools and methods Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Job details IND, HR, Gurgaon Amazon Design

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

AWS Data Engineer Work Mode: Hybrid Work Location: Chennai / Hyderabad / Bangalore / Pune / Mumbai / Gurgaon Work Timing: 2 PM to 11 PM Primary: Data Engineer AWS Data Engineer - AWS Glue, Amazon Redshift, S3, ETL Process, SQL, Databricks JD: Examining the business needs to determine the testing technique for automation testing. Maintenance of existing regression suites and test scripts is an important responsibility of the tester. The testers must attend agile meetings for backlog refinement, sprint planning, and daily scrum meetings. Testers execute regression suites for better results. Must provide results to developers, project managers, stakeholders, and manual testers. Responsibilities (AWS Data Engineer): Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3. Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes. Create and manage applications using Python, SQL, Databricks, and various AWS technologies. Automate repetitive tasks and build reusable frameworks to improve efficiency.
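The posting names AWS Glue as the primary ETL tool; the following is a minimal Glue job sketch (PySpark), with the catalog database, table, and bucket names invented as placeholders.

```python
# A minimal AWS Glue job sketch (PySpark): read from the Glue Data Catalog,
# apply a simple transform, and write query-optimized Parquet to S3.
# Database, table and bucket names are illustrative placeholders.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source table registered in the Glue Data Catalog (placeholder names)
raw = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw", table_name="orders"
).toDF()

# Basic cleanup: drop rows without a key and derive a partition column
curated = (
    raw.dropna(subset=["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write Parquet back to the lake, partitioned by date
(curated.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/"))

job.commit()
```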

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Chandigarh, India

On-site

We are seeking a highly experienced and hands-on Fullstack Architect to lead the design and architecture of scalable, enterprise-grade software solutions. This role requires a deep understanding of both frontend and backend technologies, cloud infrastructure, and microservices, with the ability to guide teams through technical challenges and solution delivery. Key Responsibilities Architect, design, and oversee the development of full-stack applications using modern JS frameworks and cloud-native tools. Lead microservice architecture design, ensuring system scalability, reliability, and performance. Evaluate and implement AWS services (Lambda, ECS, Glue, Aurora, API Gateway, etc.) for backend solutions. Provide technical leadership to engineering teams across all layers (frontend, backend, database). Guide and review code, perform performance optimization, and define coding standards. Collaborate with DevOps and Data teams to integrate services (Redshift, OpenSearch, Batch). Translate business needs into technical solutions and communicate with cross-functional stakeholders. Required Skills Deep expertise in Node.js, TypeScript, React.js, Python, Redux, and Jest. Proven experience designing and deploying systems using Microservices architecture. Strong understanding of AWS services: API Gateway, ECS, Lambda, Aurora, Glue, SQS, OpenSearch, Batch. Hands-on with MySQL, Redshift, and writing optimized queries. Advanced knowledge of HTML, CSS, Bootstrap, JavaScript. Familiarity with tools: VS Code, DataGrip, Jira, GitHub, Postman. Strong knowledge of architectural design patterns and security best practices. Preferred Experience working in fast-paced product development or startup environments. Strong communication and mentoring skills. Education & Experience: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 7–10 years of experience in full-stack development, with at least 3 years in an architectural or senior technical leadership role. How to Apply: Please send your updated CV to hiring@acmeminds.com, clearly mentioning the Job Code FSARCH-25 in the subject line of the email (e.g., Subject: Application for Job Code: FSARCH-25).
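For context on the Lambda/API Gateway microservice pattern listed under Key Responsibilities, here is a hedged sketch of a small serverless handler written in Python (one of the listed languages); the DynamoDB table name, environment variable, and payload shape are hypothetical, not part of the posting.

```python
# Hypothetical sketch of a small AWS Lambda handler fronted by API Gateway,
# the kind of serverless microservice building block mentioned above.
import json
import os
import boto3

dynamodb = boto3.resource("dynamodb")
TABLE = dynamodb.Table(os.environ.get("ORDERS_TABLE", "example-orders"))  # placeholder table

def handler(event, context):
    """Return a single order by id from a DynamoDB table (GET /orders/{id})."""
    order_id = (event.get("pathParameters") or {}).get("id")
    if not order_id:
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}

    item = TABLE.get_item(Key={"order_id": order_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

    return {"statusCode": 200, "body": json.dumps(item, default=str)}
```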

Posted 3 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: AWS Data Engineer Location: Pune, Jaipur, Bengaluru, Hyderabad, Noida Duration: Full-time Positions: Multiple Responsibilities: • Defines, designs, develops, and tests software components/applications using AWS (Databricks on AWS, AWS Glue, Amazon S3, AWS Lambda, Amazon Redshift, AWS Secrets Manager) • Strong SQL skills with hands-on experience. • Experience handling structured and unstructured datasets. • Experience in Data Modeling and advanced SQL techniques. • Experience implementing AWS Glue, Airflow, or any other data orchestration tool using the latest technologies and techniques. • Good exposure to Application Development. • The candidate should work independently with minimal supervision. Must Have: • Hands-on experience with distributed computing frameworks like Databricks and the Spark ecosystem (Spark Core, PySpark, Spark Streaming, SparkSQL) • Willing to work with product teams to best optimize product features/functions. • Experience with batch workloads and real-time streaming at high data volumes and frequencies • Performance optimization on Spark workloads • Environment setup, user management, authentication, and cluster management on Databricks • Professional curiosity and the ability to get up to speed on new technologies and tasks. • Good understanding of SQL and a good grasp of relational and analytical database management theory and practice. Good To Have: • Hands-on experience with distributed computing frameworks like Databricks. • Experience with Databricks migration from on-premises to cloud or cloud to cloud • Migration of ETL workloads from Apache Spark implementations to Databricks • Experience with Databricks ML will be a plus • Migration from Spark 2.0 to Spark 3.5 Key Skills: • Python, SQL and PySpark • Big Data Ecosystem (Hadoop, Hive, Sqoop, HDFS, HBase) • Spark Ecosystem (Spark Core, Spark Streaming, Spark SQL) / Databricks • AWS (AWS Glue, Databricks on AWS, Lambda, Amazon Redshift, Amazon S3, AWS Secrets Manager) • Data Modelling, ETL Methodology
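To illustrate the Spark Streaming / Databricks work this posting lists, here is a small Structured Streaming sketch; the paths and schema are placeholders, and Delta Lake support is assumed to be available (as it is on Databricks).

```python
# A minimal PySpark Structured Streaming sketch of real-time ingestion:
# read JSON events from S3, clean them, and append to a Delta table.
# Paths and schema are placeholders; Delta support is assumed (e.g. Databricks).
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

events = (spark.readStream
    .schema(event_schema)
    .json("s3://example-raw-bucket/events/"))       # landing zone (placeholder)

cleaned = (events
    .dropna(subset=["event_id"])
    .withColumn("event_date", F.to_date("event_ts")))

query = (cleaned.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://example-curated-bucket/_checkpoints/events/")
    .partitionBy("event_date")
    .outputMode("append")
    .start("s3://example-curated-bucket/events_delta/"))

query.awaitTermination()
```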

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Senior Data Engineering Lead (Databricks) Company Overview: At eClerx, we are a leading IT firm specializing in innovative technologies and solutions that drive business transformation. Leveraging expertise in business process management, advanced analytics, and smart automation, we empower our clients to achieve operational excellence and competitive advantage in fast-evolving markets. Role Overview: We are seeking a highly experienced Senior Data Engineering Lead with a strong focus on Databricks and cloud-based data engineering to lead our data engineering team. This leadership role requires a visionary who can design, develop, and manage scalable data infrastructure and pipelines, while mentoring and inspiring a team of data engineers. You will work closely with cross-functional teams including data scientists, analysts, and software engineers to enable robust data-driven decision-making and support business goals. Key Responsibilities: Lead and manage a team of data engineers, providing mentorship, technical guidance, and fostering a culture of collaboration and innovation. Architect, design, and oversee implementation of large-scale data pipelines, data lakes, and cloud-based data warehouses using Databricks, Apache Spark, and Snowflake. Develop and optimize ETL/ELT workflows ensuring performance, reliability, and scalability of data infrastructure. Collaborate with business stakeholders, data scientists, and software teams to understand requirements and translate them into scalable, efficient data solutions. Implement best practices for data quality, governance, security, and compliance. Drive continuous improvement of data engineering processes, standards, and tools across the organization. Support presales activities by contributing to RFPs, technical proposals, and client engagements. Stay abreast of emerging data technologies and trends, recommending innovative solutions to enhance analytics capabilities. Manage resource planning, project prioritization, and delivery timelines ensuring alignment with business objectives. Lead performance reviews, identify skill gaps, and champion professional development within the data engineering team. Facilitate cross-team communication to streamline data workflows and improve overall delivery. Qualifications & Skills: Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related discipline. Minimum 15 years of professional experience in data engineering with at least 9 years in leadership or senior technical roles. Deep hands-on expertise with Databricks and Apache Spark for large-scale data processing. Strong programming skills in Python, Scala, or Java. Extensive experience with cloud data platforms such as AWS, Azure, or GCP, including services like S3, Redshift, BigQuery, Snowflake. Solid understanding of data modeling, data warehousing, ETL/ELT design, and data lakes. Experience with big data technologies like Hadoop, Kafka, and Databricks ecosystem. Knowledge of CI/CD pipelines, data orchestration tools (e.g., Apache Airflow), and data governance best practices. Proven experience managing high-performing teams and delivering complex data engineering projects on time. Familiarity with analytics solutions and the ability to translate business needs into technical requirements. Strong communication skills, capable of engaging with both technical teams and senior leadership. Experience supporting presales efforts and client technical discussions is a plus. 
Bonus: Exposure to machine learning lifecycle and model deployment on Databricks.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Bengaluru

Remote

Skillset: PostgreSQL, Amazon Redshift, MongoDB, Apache Cassandra, AWS, ETL, Shell Scripting, Automation, Microsoft Azure We are looking for futuristic, motivated go-getters with the following skills for an exciting role. Job Description: Monitor and maintain the performance, reliability, and availability of multiple database systems. Optimize complex SQL queries, stored procedures, and ETL scripts for better performance and scalability. Troubleshoot and resolve issues related to database performance, integrity, backups, and replication. Design, implement, and manage scalable data pipelines across structured and unstructured sources. Develop automation scripts for routine maintenance tasks using Python, Bash, or similar tools. Perform regular database health checks, set up alerting mechanisms, and respond to incidents proactively. Analyze performance bottlenecks and resolve slow query issues and deadlocks. Work in DevOps/Agile environments, integrating with CI/CD pipelines for database operations. Collaborate with engineering, analytics, and infrastructure teams to integrate database solutions with applications and BI tools. Research and implement emerging technologies and best practices in database administration. Participate in capacity planning, security audits, and software upgrades for data infrastructure. Maintain comprehensive documentation related to database schemas, metadata, standards, and procedures. Ensure compliance with data privacy regulations and implement robust disaster recovery and backup strategies. Desired skills: Database Systems: Hands-on experience with SQL-based databases (PostgreSQL, MySQL), Amazon Redshift, MongoDB, and Apache Cassandra. Scripting & Automation: Proficiency in scripting using Python, Shell, or similar to automate database operations. Cloud Platforms: Working knowledge of AWS (RDS, Redshift, EC2, S3, IAM, Lambda) and Azure SQL/Azure Cosmos DB. Big Data & Distributed Systems: Familiarity with Apache Spark for distributed data processing. Performance Tuning: Deep experience in performance analysis, indexing strategies, and query optimization. Security & Compliance: Experience with database encryption, auditing, access control, and GDPR/PII policies. Familiarity with Linux and Windows server administration is a plus. Education & Experience: BE, B.Tech, MCA, M.Tech from Tier 2/3 colleges & Science Graduates; 5-8 years of work experience.
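As an example of the Python-based automation and health-check work described above, here is a hedged sketch that flags long-running PostgreSQL queries; the connection details and the 5-minute threshold are assumptions.

```python
# A hedged sketch of routine DBA automation: flag long-running PostgreSQL queries
# so an alert can be raised. Connection details and the threshold are placeholders.
import psycopg2

ALERT_THRESHOLD = "5 minutes"

LONG_RUNNING_SQL = """
SELECT pid, usename, state, now() - query_start AS runtime, left(query, 120) AS query
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > %s::interval
ORDER BY runtime DESC;
"""

def check_long_running_queries() -> None:
    conn = psycopg2.connect(host="db.example.internal", dbname="appdb",
                            user="monitor", password="***")
    try:
        with conn.cursor() as cur:
            cur.execute(LONG_RUNNING_SQL, (ALERT_THRESHOLD,))
            for pid, user, state, runtime, query in cur.fetchall():
                # In practice, push these to Slack/PagerDuty instead of printing
                print(f"[ALERT] pid={pid} user={user} runtime={runtime} :: {query}")
    finally:
        conn.close()

if __name__ == "__main__":
    check_long_running_queries()
```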

Posted 3 weeks ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Kolkata, Hyderabad, Bengaluru

Work from Office

Responsibilities Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka. Integrate structured and unstructured data from various data sources into data lakes and data warehouses. Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift). Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness. Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms. Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost. Develop application programs using Big Data technologies such as Apache Hadoop and Apache Spark, with appropriate cloud-based services such as AWS. Build data pipelines by building ETL processes (Extract-Transform-Load). Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data. Responsible for analysing business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs. Analyse requirements/user stories in business meetings, strategize the impact of requirements on different platforms/applications, and convert business requirements into technical requirements. Participate in design reviews to provide input on functional requirements, product designs, schedules and/or potential problems. Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost, require minimal maintenance, and provide high availability with improved security. Perform unit testing on modified software to ensure that the new functionality works as expected while existing functionality continues to work the same way. Coordinate with release management and other supporting teams to deploy changes to the production environment. Qualifications we seek in you! Minimum Qualifications Experience in designing and implementing data pipelines, building data applications, and data migration on AWS Strong experience implementing data lakes using AWS services like Glue, Lambda, Step Functions, Redshift Experience with Databricks will be an added advantage Strong experience in Python and SQL Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift. Advanced programming skills in Python for data processing and automation. Hands-on experience with Apache Spark for large-scale data processing. Experience with Apache Kafka for real-time data streaming and event processing. Proficiency in SQL for data querying and transformation. Strong understanding of security principles and best practices for cloud-based environments. Experience with monitoring tools and implementing proactive measures to ensure system availability and performance. Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment. Strong communication and collaboration skills to work effectively with cross-functional teams. Preferred Qualifications/Skills Master's Degree in Computer Science, Electronics, or Electrical Engineering. AWS Data Engineering & Cloud certifications, Databricks certifications Experience with multiple data integration technologies and cloud platforms Knowledge of Change & Incident Management processes
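A minimal sketch of the Kafka-based ingestion mentioned in the responsibilities: consume events and land micro-batches in S3 for downstream Glue/Redshift processing. The topic, broker, bucket, and batch size are placeholders, not values from the posting.

```python
# Hedged sketch: consume JSON events from Kafka (kafka-python) and write
# newline-delimited JSON micro-batches to S3 with boto3. Names are placeholders.
import json
import time
import boto3
from kafka import KafkaConsumer  # kafka-python package

s3 = boto3.client("s3")
consumer = KafkaConsumer(
    "orders-events",                               # topic (placeholder)
    bootstrap_servers=["broker-1.example:9092"],
    group_id="orders-s3-sink",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

batch, BATCH_SIZE = [], 500

for message in consumer:
    batch.append(message.value)
    if len(batch) >= BATCH_SIZE:
        key = f"raw/orders/{int(time.time())}.json"
        body = "\n".join(json.dumps(r) for r in batch)   # newline-delimited JSON
        s3.put_object(Bucket="example-raw-bucket", Key=key, Body=body.encode("utf-8"))
        batch.clear()
```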

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title : Data Architect Location: Noida, India Data Architecture Design: Design, develop, and maintain the enterprise data architecture, including data models, database schemas, and data flow diagrams. Develop a data strategy and roadmap that aligns with business objectives and ensures the scalability of data systems. Architect both transactional (OLTP) and analytical (OLAP) databases, ensuring optimal performance and data consistency. Data Integration & Management: Oversee the integration of disparate data sources into a unified data platform, leveraging ETL/ELT processes and data integration tools. Design and implement data warehousing solutions, data lakes, and/or data marts that enable efficient storage and retrieval of large datasets. Ensure proper data governance, including the definition of data ownership, security, and privacy controls in accordance with compliance standards (GDPR, HIPAA, etc.). Collaboration with Stakeholders: Work closely with business stakeholders, including analysts, developers, and executives, to understand data requirements and ensure that the architecture supports analytics and reporting needs. Collaborate with DevOps and engineering teams to optimize database performance and support large-scale data processing pipelines. Technology Leadership: Guide the selection of data technologies, including databases (SQL/NoSQL), data processing frameworks (Hadoop, Spark), cloud platforms (Azure is a must), and analytics tools. Stay updated on emerging data management technologies, trends, and best practices, and assess their potential application within the organization. Data Quality & Security: Define data quality standards and implement processes to ensure the accuracy, completeness, and consistency of data across all systems. Establish protocols for data security, encryption, and backup/recovery to protect data assets and ensure business continuity. Mentorship & Leadership: Lead and mentor data engineers, data modelers, and other technical staff in best practices for data architecture and management. Provide strategic guidance on data-related projects and initiatives, ensuring that all efforts are aligned with the enterprise data strategy. Required Skills & Experience: Extensive Data Architecture Expertise: Over 7 years of experience in data architecture, data modeling, and database management. Proficiency in designing and implementing relational (SQL) and non-relational (NoSQL) database solutions. Strong experience with data integration tools (Azure Tools are a must + any other third party tools), ETL/ELT processes, and data pipelines. Advanced Knowledge of Data Platforms: Expertise in Azure cloud data platform is a must. Other platforms such as AWS (Redshift, S3), Azure (Data Lake, Synapse), and/or Google Cloud Platform (BigQuery, Dataproc) is a bonus. Experience with big data technologies (Hadoop, Spark) and distributed systems for large-scale data processing. Hands-on experience with data warehousing solutions and BI tools (e.g., Power BI, Tableau, Looker). Data Governance & Compliance: Strong understanding of data governance principles, data lineage, and data stewardship. Knowledge of industry standards and compliance requirements (e.g., GDPR, HIPAA, SOX) and the ability to architect solutions that meet these standards. Technical Leadership: Proven ability to lead data-driven projects, manage stakeholders, and drive data strategies across the enterprise. Strong programming skills in languages such as Python, SQL, R, or Scala. 
Certification: Azure Certified Solution Architect, Data Engineer, Data Scientist certifications are mandatory. Pre-Sales Responsibilities: Stakeholder Engagement: Work with product stakeholders to analyze functional and non-functional requirements, ensuring alignment with business objectives. Solution Development: Develop end-to-end solutions involving multiple products, ensuring security and performance benchmarks are established, achieved, and maintained. Proof of Concepts (POCs): Develop POCs to demonstrate the feasibility and benefits of proposed solutions. Client Communication: Communicate system requirements and solution architecture to clients and stakeholders, providing technical assistance and guidance throughout the pre-sales process. Technical Presentations: Prepare and deliver technical presentations to prospective clients, demonstrating how proposed solutions meet their needs and requirements. Additional Responsibilities: Stakeholder Collaboration: Engage with stakeholders to understand their requirements and translate them into effective technical solutions. Technology Leadership: Provide technical leadership and guidance to development teams, ensuring the use of best practices and innovative solutions. Integration Management: Oversee the integration of solutions with existing systems and third-party applications, ensuring seamless interoperability and data flow. Performance Optimization: Ensure solutions are optimized for performance, scalability, and security, addressing any technical challenges that arise. Quality Assurance: Establish and enforce quality assurance standards, conducting regular reviews and testing to ensure robustness and reliability. Documentation: Maintain comprehensive documentation of the architecture, design decisions, and technical specifications. Mentoring: Mentor fellow developers and team leads, fostering a collaborative and growth-oriented environment. Qualifications: Education: Bachelor’s or master’s degree in computer science, Information Technology, or a related field. Experience: Minimum of 7 years of experience in data architecture, with a focus on developing scalable and high-performance solutions. Technical Expertise: Proficient in architectural frameworks, cloud computing, database management, and web technologies. Analytical Thinking: Strong problem-solving skills, with the ability to analyze complex requirements and design scalable solutions. Leadership Skills: Demonstrated ability to lead and mentor technical teams, with excellent project management skills. Communication: Excellent verbal and written communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders.

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Nutrabay is seeking a highly analytical and detail-oriented Data Analyst to join our team. The ideal candidate will have a strong background in data analysis, visualization, and building robust analytics platforms to support business growth across product, marketing, and engineering teams. You should apply if you have: Proven experience in building analytics platforms from scratch, preferably in a product-based or e-commerce company. Strong hands-on experience in Power BI (DAX, Power Query, performance optimization). Advanced SQL skills, including complex query building and optimization. Expertise in ETL pipelines , data modeling, and data warehousing. Experience working with cloud data platforms like AWS Redshift and Google BigQuery. The ability to interpret and translate large datasets into actionable business insights. A collaborative mindset and excellent communication skills to present data-driven narratives. A strong sense of ownership and curiosity in solving business problems through data. You should not apply if you: Do not have practical experience with data visualization tools like Power BI. Are unfamiliar with writing and optimizing SQL queries. Have never worked on ETL or cloud-based data solutions (Redshift, BigQuery, etc.). Struggle with interpreting business problems into analytical solutions. Are uncomfortable in a high-ownership, fast-paced, product-led environment. Skills Required: Power BI (including DAX, Power Query, report optimization) SQL (data extraction, transformation, performance tuning) ETL Process Design and Data Warehousing AWS Redshift / Google BigQuery Python (Preferred, for automation and data wrangling) Data Governance & Security Business Intelligence & Analytical Storytelling Cross-functional stakeholder communication Data-driven decision support and impact measurement What will you do? Build and manage a centralized analytics platform from the ground up. Create insightful dashboards and reports using Power BI to drive strategic decisions. Design and implement ETL processes to integrate data from multiple sources. Ensure the accuracy, completeness, and reliability of all reporting systems. Collaborate with product managers, engineers, marketing, and leadership to define KPIs and data strategies. Conduct deep-dive investigations into customer behavior, performance metrics, and market trends. Develop internal tools and models to track business health and opportunity areas. Drive initiatives to enhance analytics adoption across teams. Own the data governance practices to ensure compliance and data security. Provide thought leadership and mentor junior analysts in the team. Work Experience: 2–4 years of experience in data analytics, business intelligence, or data engineering roles. Prior experience in a product-based company or high-growth e-commerce environment is strongly preferred. Working Days: Monday – Friday Location: Golf Course Road, Gurugram, Haryana (Work from Office) Perks: Opportunity to build a complete analytics infrastructure from scratch. Cross-functional exposure to tech, marketing, and product. Freedom to implement ideas and drive measurable impact. A collaborative work environment focused on growth and innovation. High learning and personal growth opportunity in a rapidly growing D2C company. Why Nutrabay: We believe in an open, intellectually honest culture where everyone is given the autonomy to contribute and do their life’s best work. 
As a part of the dynamic team at Nutrabay, you will have a chance to learn new things, solve new problems, build your competence, and be a part of an innovative marketing-and-tech startup that’s revolutionising the health industry. Working with Nutrabay can be fun and offers a unique growth opportunity. Here you will learn how to maximise the potential of your available resources. You will get the opportunity to do work that helps you master a variety of transferable skills, or skills that are relevant across roles and departments. You will feel appreciated and valued for the work you deliver. We are creating a unique company culture that embodies respect and honesty, which builds more loyal employees than simply shelling out cash. We trust our employees and their voice and ask for their opinions on important business issues. About Nutrabay: Nutrabay is the largest health & nutrition store in India. Our vision is to keep growing, maintain a sustainable business model, and continue to be the market leader in this segment by launching many innovative products. We are proud to have served over 1 million customers so far, and our family is constantly growing. We have built a complex and high-converting eCommerce system, and our monthly traffic has grown to a million. We are looking to build a visionary and agile team to help fuel our growth and contribute towards further advancing the continuously evolving product. Funding: We raised $5 million in Series A funding.

Posted 3 weeks ago

Apply

4.0 years

18 - 22 Lacs

Bengaluru, Karnataka, India

On-site

This role is for one of the Weekday's clients Salary range: Rs 1800000 - Rs 2200000 (ie INR 18-22 LPA) Min Experience: 4 years Location: Bangalore, Bengaluru JobType: full-time We are seeking a skilled and detail-oriented Data Modeller with 4-6 years of experience to join our growing data engineering team. In this role, you will play a critical part in designing, implementing, and optimizing robust data models that support business intelligence, analytics, and operational data needs. You will collaborate with cross-functional teams to understand business requirements and convert them into scalable and efficient data solutions, primarily leveraging Amazon Redshift and Erwin Data Modeller. Requirements Key Responsibilities: Design and implement conceptual, logical, and physical data models that support business processes and reporting needs. Develop data models optimized for Amazon Redshift, ensuring performance, scalability, and integrity of data. Work closely with business analysts, data engineers, and stakeholders to translate business requirements into data structures. Use Erwin Data Modeller (Erwin ERP) to create and maintain data models and maintain metadata repositories. Collaborate with ETL developers to ensure efficient data ingestion and transformation pipelines that align with the data model. Apply normalization, denormalization, and indexing strategies to optimize data performance and access. Perform data profiling and source system analysis to validate assumptions and model accuracy. Create and maintain detailed documentation, including data dictionaries, entity relationship diagrams (ERDs), and data lineage information. Drive consistency and standardization across all data models, ensuring alignment with enterprise data architecture and governance policies. Identify opportunities to improve data quality, model efficiency, and pipeline performance. Required Skills and Qualifications: 4-6 years of hands-on experience in data modeling, including conceptual, logical, and physical modeling. Strong expertise in Amazon Redshift and Redshift-specific modeling best practices. Proficiency with Erwin Data Modeller (Erwin ERP) or similar modeling tools. Strong knowledge of SQL with experience writing complex queries and performance tuning. Solid understanding of ETL processes and experience working alongside ETL engineers to integrate data from multiple sources. Familiarity with dimensional modeling, data warehousing principles, and star/snowflake schemas. Experience with metadata management, data governance, and maintaining modeling standards. Ability to work independently and collaboratively in a fast-paced, data-driven environment. Strong analytical and communication skills with the ability to present technical concepts to non-technical stakeholders. Preferred Qualifications: Experience working in a cloud-native data environment (AWS preferred). Exposure to other data modeling tools and cloud data warehouses is a plus. Familiarity with data catalog tools, data lineage tracing, and data quality frameworks
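For illustration of the Redshift-specific physical modeling this role emphasizes, here is a hedged example of a fact-table DDL with an explicit distribution key and sort key, issued from Python; the schema, cluster endpoint, and credentials are invented for the example.

```python
# A hedged example of Redshift-oriented physical modeling: a fact table with an
# explicit DISTKEY and SORTKEY, created from Python. Names are illustrative only.
import psycopg2

FACT_SALES_DDL = """
CREATE TABLE IF NOT EXISTS mart.fact_sales (
    sale_id      BIGINT        NOT NULL,
    customer_key INT           NOT NULL,
    product_key  INT           NOT NULL,
    sale_date    DATE          NOT NULL,
    amount       DECIMAL(12,2)
)
DISTSTYLE KEY
DISTKEY (customer_key)      -- co-locate rows joined on customer_key
SORTKEY (sale_date);        -- prune blocks for date-range queries
"""

def create_fact_table() -> None:
    conn = psycopg2.connect(host="example-cluster.redshift.amazonaws.com",
                            port=5439, dbname="dev", user="modeler", password="***")
    with conn, conn.cursor() as cur:
        cur.execute(FACT_SALES_DDL)
    conn.close()

if __name__ == "__main__":
    create_fact_table()
```

Choosing the distribution key around the most common join column and the sort key around the most common filter column is the usual trade-off such a model documents.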

Posted 3 weeks ago

Apply

4.0 - 6.0 years

12 - 18 Lacs

Chennai, Bengaluru

Work from Office

Key Skills : Python, SQL, PySpark, Databricks, AWS, Data Pipeline, Data Integration, Airflow, Delta Lake, Redshift, S3, Data Security, Cloud Platforms, Life Sciences. Roles & Responsibilities : Develop and maintain robust, scalable data pipelines for ingesting, transforming, and optimizing large datasets from diverse sources. Integrate multi-source data into performant, query-optimized formats such as Delta Lake, Redshift, and S3. Tune data processing jobs and storage layers to ensure cost efficiency and high throughput. Automate data workflows using orchestration tools like Airflow and Databricks APIs for ingestion, transformation, and reporting. Implement data validation and quality checks to ensure reliable and accurate data. Manage and optimize AWS and Databricks infrastructure to support scalable data operations. Lead cloud platform migrations and upgrades, transitioning legacy systems to modern, cloud-native solutions. Enforce security best practices, ensuring compliance with regulatory standards such as IAM and data encryption. Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders to deliver data solutions. Experience Requirement : 4-6 years of hands-on experience in data engineering with expertise in Python, SQL, PySpark, Databricks, and AWS. Strong background in designing and building data pipelines, and optimizing data storage and processing. Proficiency in using cloud services such as AWS (S3, Redshift, Lambda) for building scalable data solutions. Hands-on experience with containerized environments and orchestration tools like Airflow for automating data workflows. Expertise in data migration strategies and transitioning legacy data systems to modern cloud platforms. Experience with performance tuning, cost optimization, and lifecycle management of cloud data solutions. Familiarity with regulatory compliance (GDPR, HIPAA) and security practices (IAM, encryption). Experience in the Life Sciences or Pharma domain is highly preferred, with an understanding of industry-specific data requirements. Strong problem-solving abilities with a focus on delivering high-quality data solutions that meet business needs. Education : Any Graduation.
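A small Airflow DAG sketch of the orchestration work described above (ingest, transform, validate as dependent tasks); the task bodies are stubs, and the DAG name, owner-free defaults, and daily schedule are assumptions.

```python
# Minimal Airflow 2.x-style DAG sketch: three dependent tasks for a daily pipeline.
# Task bodies are stubs; schedule and names are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():      # e.g. pull source extracts into S3
    ...

def transform():   # e.g. trigger a Databricks/PySpark job
    ...

def validate():    # e.g. run row-count and null checks on the output
    ...

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["example"],
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)

    t_ingest >> t_transform >> t_validate
```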

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Greeting from Infosys BPM Ltd., We are hiring for Test Automation using Java and Selenium, with knowledge on testing process, SQL, ETL DB Testing, ETL Testing Automation skills. Please walk-in for interview on 14th & 15th July 2025 at Chennai location Note: Please carry copy of this email to the venue and make sure you register your application before attending the walk-in. Please use below link to apply and register your application. Please mention Candidate ID on top of the Resume *** https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-217871 Interview details Interview Date: 14th & 15th July 2025 Interview Time: 10 AM till 1 PM Interview Venue:TP 1/1, Central Avenue Techno Park, SEZ, Mahindra World City, Paranur, TamilNadu Please find below Job Description for your reference: Work from Office*** Rotational Shifts Min 2 years of experience on project is mandate*** Job Description: Test Automation using Java and Selenium, with knowledge on testing process, SQL Java, Selenium automation, SQL, Testing concepts, Agile. Tools: Jira and ALM, Intellij Functional Testing: UI Test Automation using Selenium, Java Financial domain experience Job Description: ETL DB Testing Strong experience in ETL testing, data warehousing, and business intelligence. Strong proficiency in SQL. Experience with ETL tools (e.g., Informatica, Talend, AWS Glue, Azure Data Factory). Solid understanding of Data Warehousing concepts, Database Systems and Quality Assurance. Experience with test planning, test case development, and test execution. Experience writing complex SQL Queries and using SQL tools is a must, exposure to various data analytical functions. Familiarity with defect tracking tools (e.g., Jira). Experience with cloud platforms like AWS, Azure, or GCP is a plus. Experience with Python or other scripting languages for test automation is a plus. Experience with data quality tools is a plus. Experience in testing of large datasets. Experience in agile development is must Understanding of Oracle Database and UNIX/VMC systems is a must Job Description: ETL Testing Automation Strong experience in ETL testing and automation. Strong proficiency in SQL and experience with relational databases (e.g., Oracle, MySQL, PostgreSQL, SQL Server). Experience with ETL tools and technologies (e.g., Informatica, Talend, DataStage, Apache Spark). Hands-on experience in developing and maintaining test automation frameworks. Proficiency in at least one programming language (e.g., Python, Java). Experience with test automation tools (e.g., Selenium, PyTest, JUnit). Strong understanding of data warehousing concepts and methodologies. Experience with CI/CD pipelines and version control systems (e.g., Git). Experience with cloud-based data warehouses like Snowflake, Redshift, BigQuery is a plus. Experience with data quality tools is a plus. REGISTRATION PROCESS: The Candidate ID & SHL Test(AMCAT ID) is mandatory to attend the interview. Please follow the below instructions to successfully complete the registration. (Talents without registration & assessment will not be allowed for the Interview). Candidate ID Registration process: STEP 1: Visit: https://career.infosys.com/joblist STEP 2: Click on "Register" and provide the required details and submit. STEP 3: Once submitted, Your Candidate ID(100XXXXXXXX) will be generated. STEP 4: The candidate ID will be shared to the registered Email ID. 
SHL Test(AMCAT ID) Registration process: This assessment is proctored, and talent gets evaluated on Basic analytics, English Comprehension and writex (email writing). STEP 1: Visit: https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fautologin-talentcentral.shl.com%2F%3Flink%3Dhttps%3A%2F%2Famcatglobal.aspiringminds.com%2F%3Fdata%3DJTdCJTIybG9naW4lMjIlM0ElN0IlMjJsYW5ndWFnZSUyMiUzQSUyMmVuLVVTJTIyJTJDJTIyaXNBdXRvbG9naW4lMjIlM0ExJTJDJTIycGFydG5lcklkJTIyJTNBJTIyNDE4MjQlMjIlMkMlMjJhdXRoa2V5JTIyJTNBJTIyWm1abFpUazFPV1JsTnpJeU1HVTFObU5qWWpRNU5HWTFOVEU1Wm1JeE16TSUzRCUyMiUyQyUyMnVzZXJuYW1lJTIyJTNBJTIydXNlcm5hbWVfc3E5QmgxSWI5NEVmQkkzN2UlMjIlMkMlMjJwYXNzd29yZCUyMiUzQSUyMnBhc3N3b3JkJTIyJTJDJTIycmV0dXJuVXJsJTIyJTNBJTIyJTIyJTdEJTJDJTIycmVnaW9uJTIyJTNBJTIyVVMlMjIlN0Q%3D%26apn%3Dcom.shl.talentcentral%26ibi%3Dcom.shl.talentcentral%26isi%3D1551117793%26efr%3D1&data=05%7C02%7Comar.muqtar%40infosys.com%7Ca7ffe71a4fe4404f3dac08dca01c0bb3%7C63ce7d592f3e42cda8ccbe764cff5eb6%7C0%7C0%7C638561289526257677%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=s28G3ArC9nR5S7J4j%2FV1ZujEnmYCbysbYke41r5svPw%3D&reserved=0 STEP 2: Click on "Start new test" and follow the instructions to complete the assessment. STEP 3: Once completed, please make a note of the AMCAT ID( Access you Amcat id by clicking 3 dots on top right corner of screen). NOTE: During registration, you'll be asked to provide the following information: Personal Details: Name, Email Address, Mobile Number, PAN number. Availability: Acknowledgement of work schedule preferences (Shifts, Work from Office, Rotational Weekends, 24/7 availability, Transport Boundary) and reason for career change. Employment Details: Current notice period and total annual compensation (CTC) in the format 390000 - 4 LPA (example). Candidate Information: 10-digit candidate ID starting with 100XXXXXXX, Gender, Source (e.g., Vendor name, Naukri/LinkedIn/Found it, or Direct), and Location Interview Mode: Walk-in Attempt all questions in the SHL Assessment app. The assessment is proctored, so choose a quiet environment. Use a headset or Bluetooth headphones for clear communication. A passing score is required for further interview rounds. 5 or above toggles, multi face detected, face not detected, or any malpractice will be considered rejected Once you've finished, submit the assessment and make a note of the AMCAT ID (15 Digit) used for the assessment. Documents to Carry: Please have a note of Candidate ID & AMCAT ID along with registered Email ID. Please do not carry laptops/cameras to the venue as these will not be allowed due to security restrictions. Please carry 2 set of updated Resume/CV (Hard Copy). Please carry original ID proof for security clearance. Please carry individual headphone/Bluetooth for the interview. Pointers to note: Please do not carry laptops/cameras to the venue as these will not be allowed due to security restrictions. Original Government ID card is must for Security Clearance. Regards, Infosys BPM Recruitment team.
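For candidates new to ETL test automation with PyTest, the SQL-driven validation work described in this posting might look roughly like the sketch below; the connection parameters and table names are placeholders, not details from the job description.

```python
# A minimal PyTest-style sketch of ETL validation: compare row counts and check
# business keys between a source and a target table. Connection details are placeholders.
import psycopg2

SOURCE = {"host": "source-db.example", "dbname": "oltp", "user": "qa", "password": "***"}
TARGET = {"host": "warehouse.example", "dbname": "dwh", "user": "qa", "password": "***"}

def fetch_scalar(conn_params: dict, sql: str):
    """Run a single-value query and return the first column of the first row."""
    with psycopg2.connect(**conn_params) as conn, conn.cursor() as cur:
        cur.execute(sql)
        return cur.fetchone()[0]

def test_row_counts_match():
    src = fetch_scalar(SOURCE, "SELECT COUNT(*) FROM orders")
    tgt = fetch_scalar(TARGET, "SELECT COUNT(*) FROM staging.orders")
    assert src == tgt, f"row count mismatch: source={src}, target={tgt}"

def test_no_null_business_keys():
    nulls = fetch_scalar(TARGET, "SELECT COUNT(*) FROM staging.orders WHERE order_id IS NULL")
    assert nulls == 0, f"{nulls} rows loaded without an order_id"
```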

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Greeting from Infosys BPM Ltd, Exclusive Women's Walkin drive We are hiring for Walkme, ETL Testing + Python Programming, Automation Testing with Java, Selenium, BDD, Cucumber, Test Automation using Java and Selenium, with knowledge on testing process, SQL, ETL DB Testing, ETL Testing Automation skills. Please walk-in for interview on 16th July 2025 at Pune location Note: Please carry copy of this email to the venue and make sure you register your application before attending the walk-in. Please use below link to apply and register your application. Please mention Candidate ID on top of the Resume *** https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-217822 Interview details Interview Date: 16th July 2025 Interview Time: 10 AM till 1 PM Interview Venue: Pune:: Hinjewadi Phase 1 Infosys BPM Limited, Plot No. 1, Building B1, Ground floor, Hinjewadi Rajiv Gandhi Infotech Park, Hinjewadi Phase 1, Pune, Maharashtra-411057 Please find below Job Description for your reference: Work from Office*** Min 2 years of experience on project is mandate*** Job Description: Walkme Design, develop, and deploy WalkMe solutions to enhance user experience and drive digital adoption. Experience in task-based documentation, training and content strategy Experience working in a multi-disciplined team with geographically distributed co-workers Working knowledge technologies such as CSS and JavaScript Project management and/or Jira experience Experience in developing in-app guidance using tools such as WalkMe, Strong experience in technical writing, instructional video or guided learning experience in a software company Job Description: ETL Testing + Python Programming Experience in Data Migration Testing (ETL Testing), Manual & Automation with Python Programming. Strong on writing complex SQLs for data migration validations. Work experience with Agile Scrum Methodology Functional Testing- UI Test Automation using Selenium, Java Financial domain experience Good to have AWS knowledge Job Description: Automation Testing with Java, Selenium, BDD, Cucumber Hands on exp in Automation. Java, Selenium, BDD , Cucumber expertise is mandatory. Banking Domian Experience is good. Financial domain experience Automation Talent with TOSCA skills, Payment domain skills is preferable. Job Description: Test Automation using Java and Selenium, with knowledge on testing process, SQL Java, Selenium automation, SQL, Testing concepts, Agile. Tools: Jira and ALM, Intellij Functional Testing: UI Test Automation using Selenium, Java Financial domain experience Job Description: ETL DB Testing Strong experience in ETL testing, data warehousing, and business intelligence. Strong proficiency in SQL. Experience with ETL tools (e.g., Informatica, Talend, AWS Glue, Azure Data Factory). Solid understanding of Data Warehousing concepts, Database Systems and Quality Assurance. Experience with test planning, test case development, and test execution. Experience writing complex SQL Queries and using SQL tools is a must, exposure to various data analytical functions. Familiarity with defect tracking tools (e.g., Jira). Experience with cloud platforms like AWS, Azure, or GCP is a plus. Experience with Python or other scripting languages for test automation is a plus. Experience with data quality tools is a plus. Experience in testing of large datasets. 
Experience in agile development is must Understanding of Oracle Database and UNIX/VMC systems is a must Job Description: ETL Testing Automation Strong experience in ETL testing and automation. Strong proficiency in SQL and experience with relational databases (e.g., Oracle, MySQL, PostgreSQL, SQL Server). Experience with ETL tools and technologies (e.g., Informatica, Talend, DataStage, Apache Spark). Hands-on experience in developing and maintaining test automation frameworks. Proficiency in at least one programming language (e.g., Python, Java). Experience with test automation tools (e.g., Selenium, PyTest, JUnit). Strong understanding of data warehousing concepts and methodologies. Experience with CI/CD pipelines and version control systems (e.g., Git). Experience with cloud-based data warehouses like Snowflake, Redshift, BigQuery is a plus. Experience with data quality tools is a plus. REGISTRATION PROCESS: The Candidate ID & SHL Test(AMCAT ID) is mandatory to attend the interview. Please follow the below instructions to successfully complete the registration. (Talents without registration & assessment will not be allowed for the Interview). Candidate ID Registration process: STEP 1: Visit: https://career.infosys.com/joblist STEP 2: Click on "Register" and provide the required details and submit. STEP 3: Once submitted, Your Candidate ID(100XXXXXXXX) will be generated. STEP 4: The candidate ID will be shared to the registered Email ID. SHL Test(AMCAT ID) Registration process: This assessment is proctored, and talent gets evaluated on Basic analytics, English Comprehension and writex (email writing). STEP 1: Visit: https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fautologin-talentcentral.shl.com%2F%3Flink%3Dhttps%3A%2F%2Famcatglobal.aspiringminds.com%2F%3Fdata%3DJTdCJTIybG9naW4lMjIlM0ElN0IlMjJsYW5ndWFnZSUyMiUzQSUyMmVuLVVTJTIyJTJDJTIyaXNBdXRvbG9naW4lMjIlM0ExJTJDJTIycGFydG5lcklkJTIyJTNBJTIyNDE4MjQlMjIlMkMlMjJhdXRoa2V5JTIyJTNBJTIyWm1abFpUazFPV1JsTnpJeU1HVTFObU5qWWpRNU5HWTFOVEU1Wm1JeE16TSUzRCUyMiUyQyUyMnVzZXJuYW1lJTIyJTNBJTIydXNlcm5hbWVfc3E5QmgxSWI5NEVmQkkzN2UlMjIlMkMlMjJwYXNzd29yZCUyMiUzQSUyMnBhc3N3b3JkJTIyJTJDJTIycmV0dXJuVXJsJTIyJTNBJTIyJTIyJTdEJTJDJTIycmVnaW9uJTIyJTNBJTIyVVMlMjIlN0Q%3D%26apn%3Dcom.shl.talentcentral%26ibi%3Dcom.shl.talentcentral%26isi%3D1551117793%26efr%3D1&data=05%7C02%7Comar.muqtar%40infosys.com%7Ca7ffe71a4fe4404f3dac08dca01c0bb3%7C63ce7d592f3e42cda8ccbe764cff5eb6%7C0%7C0%7C638561289526257677%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=s28G3ArC9nR5S7J4j%2FV1ZujEnmYCbysbYke41r5svPw%3D&reserved=0 STEP 2: Click on "Start new test" and follow the instructions to complete the assessment. STEP 3: Once completed, please make a note of the AMCAT ID( Access you Amcat id by clicking 3 dots on top right corner of screen). NOTE: During registration, you'll be asked to provide the following information: Personal Details: Name, Email Address, Mobile Number, PAN number. Availability: Acknowledgement of work schedule preferences (Shifts, Work from Office, Rotational Weekends, 24/7 availability, Transport Boundary) and reason for career change. Employment Details: Current notice period and total annual compensation (CTC) in the format 390000 - 4 LPA (example). Candidate Information: 10-digit candidate ID starting with 100XXXXXXX, Gender, Source (e.g., Vendor name, Naukri/LinkedIn/Found it, or Direct), and Location Interview Mode: Walk-in Attempt all questions in the SHL Assessment app. 
The assessment is proctored, so choose a quiet environment. Use a headset or Bluetooth headphones for clear communication. A passing score is required for further interview rounds. 5 or above toggles, multi face detected, face not detected, or any malpractice will be considered rejected Once you've finished, submit the assessment and make a note of the AMCAT ID (15 Digit) used for the assessment. Documents to Carry: Please have a note of Candidate ID & AMCAT ID along with registered Email ID. Please do not carry laptops/cameras to the venue as these will not be allowed due to security restrictions. Please carry 2 set of updated Resume/CV (Hard Copy). Please carry original ID proof for security clearance. Please carry individual headphone/Bluetooth for the interview. Pointers to note: Please do not carry laptops/cameras to the venue as these will not be allowed due to security restrictions. Original Government ID card is must for Security Clearance. Regards, Infosys BPM Recruitment team.

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Greetings from Infosys BPM Ltd. We are hiring for Test Automation using Java and Selenium, with knowledge of the testing process, SQL, ETL DB Testing, and ETL Testing Automation skills. Please walk in for the interview on 16th July 2025 at the Chennai location. Note: Please carry a copy of this email to the venue and make sure you register your application before attending the walk-in. Please use the below link to apply and register your application. Please mention the Candidate ID on top of the Resume *** https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-217871 Interview details Interview Date: 16th July 2025 Interview Time: 10 AM till 1 PM Interview Venue: TP 1/1, Central Avenue Techno Park, SEZ, Mahindra World City, Paranur, Tamil Nadu Please find below the Job Description for your reference: Work from Office*** Rotational Shifts A minimum of 2 years of project experience is mandatory*** Job Description: Test Automation using Java and Selenium, with knowledge of the testing process and SQL. Java, Selenium automation, SQL, Testing concepts, Agile. Tools: Jira and ALM, IntelliJ Functional Testing: UI Test Automation using Selenium, Java Financial domain experience Job Description: ETL DB Testing Strong experience in ETL testing, data warehousing, and business intelligence. Strong proficiency in SQL. Experience with ETL tools (e.g., Informatica, Talend, AWS Glue, Azure Data Factory). Solid understanding of Data Warehousing concepts, Database Systems, and Quality Assurance. Experience with test planning, test case development, and test execution. Experience writing complex SQL queries and using SQL tools is a must, with exposure to various data analytical functions. Familiarity with defect tracking tools (e.g., Jira). Experience with cloud platforms like AWS, Azure, or GCP is a plus. Experience with Python or other scripting languages for test automation is a plus. Experience with data quality tools is a plus. Experience in testing large datasets. Experience in agile development is a must. Understanding of Oracle Database and UNIX/VMC systems is a must. Job Description: ETL Testing Automation Strong experience in ETL testing and automation. Strong proficiency in SQL and experience with relational databases (e.g., Oracle, MySQL, PostgreSQL, SQL Server). Experience with ETL tools and technologies (e.g., Informatica, Talend, DataStage, Apache Spark). Hands-on experience in developing and maintaining test automation frameworks. Proficiency in at least one programming language (e.g., Python, Java). Experience with test automation tools (e.g., Selenium, PyTest, JUnit). Strong understanding of data warehousing concepts and methodologies. Experience with CI/CD pipelines and version control systems (e.g., Git). Experience with cloud-based data warehouses like Snowflake, Redshift, or BigQuery is a plus. Experience with data quality tools is a plus. REGISTRATION PROCESS: The Candidate ID & SHL Test (AMCAT ID) are mandatory to attend the interview. Please follow the below instructions to successfully complete the registration. (Candidates without registration & assessment will not be allowed for the interview.) Candidate ID Registration process: STEP 1: Visit: https://career.infosys.com/joblist STEP 2: Click on "Register", provide the required details, and submit. STEP 3: Once submitted, your Candidate ID (100XXXXXXXX) will be generated. STEP 4: The Candidate ID will be shared with the registered Email ID. 
SHL Test(AMCAT ID) Registration process: This assessment is proctored, and talent gets evaluated on Basic analytics, English Comprehension and writex (email writing). STEP 1: Visit: https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fautologin-talentcentral.shl.com%2F%3Flink%3Dhttps%3A%2F%2Famcatglobal.aspiringminds.com%2F%3Fdata%3DJTdCJTIybG9naW4lMjIlM0ElN0IlMjJsYW5ndWFnZSUyMiUzQSUyMmVuLVVTJTIyJTJDJTIyaXNBdXRvbG9naW4lMjIlM0ExJTJDJTIycGFydG5lcklkJTIyJTNBJTIyNDE4MjQlMjIlMkMlMjJhdXRoa2V5JTIyJTNBJTIyWm1abFpUazFPV1JsTnpJeU1HVTFObU5qWWpRNU5HWTFOVEU1Wm1JeE16TSUzRCUyMiUyQyUyMnVzZXJuYW1lJTIyJTNBJTIydXNlcm5hbWVfc3E5QmgxSWI5NEVmQkkzN2UlMjIlMkMlMjJwYXNzd29yZCUyMiUzQSUyMnBhc3N3b3JkJTIyJTJDJTIycmV0dXJuVXJsJTIyJTNBJTIyJTIyJTdEJTJDJTIycmVnaW9uJTIyJTNBJTIyVVMlMjIlN0Q%3D%26apn%3Dcom.shl.talentcentral%26ibi%3Dcom.shl.talentcentral%26isi%3D1551117793%26efr%3D1&data=05%7C02%7Comar.muqtar%40infosys.com%7Ca7ffe71a4fe4404f3dac08dca01c0bb3%7C63ce7d592f3e42cda8ccbe764cff5eb6%7C0%7C0%7C638561289526257677%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=s28G3ArC9nR5S7J4j%2FV1ZujEnmYCbysbYke41r5svPw%3D&reserved=0 STEP 2: Click on "Start new test" and follow the instructions to complete the assessment. STEP 3: Once completed, please make a note of the AMCAT ID( Access you Amcat id by clicking 3 dots on top right corner of screen). NOTE: During registration, you'll be asked to provide the following information: Personal Details: Name, Email Address, Mobile Number, PAN number. Availability: Acknowledgement of work schedule preferences (Shifts, Work from Office, Rotational Weekends, 24/7 availability, Transport Boundary) and reason for career change. Employment Details: Current notice period and total annual compensation (CTC) in the format 390000 - 4 LPA (example). Candidate Information: 10-digit candidate ID starting with 100XXXXXXX, Gender, Source (e.g., Vendor name, Naukri/LinkedIn/Found it, or Direct), and Location Interview Mode: Walk-in Attempt all questions in the SHL Assessment app. The assessment is proctored, so choose a quiet environment. Use a headset or Bluetooth headphones for clear communication. A passing score is required for further interview rounds. 5 or above toggles, multi face detected, face not detected, or any malpractice will be considered rejected Once you've finished, submit the assessment and make a note of the AMCAT ID (15 Digit) used for the assessment. Documents to Carry: Please have a note of Candidate ID & AMCAT ID along with registered Email ID. Please do not carry laptops/cameras to the venue as these will not be allowed due to security restrictions. Please carry 2 set of updated Resume/CV (Hard Copy). Please carry original ID proof for security clearance. Please carry individual headphone/Bluetooth for the interview. Pointers to note: Please do not carry laptops/cameras to the venue as these will not be allowed due to security restrictions. Original Government ID card is must for Security Clearance. Regards, Infosys BPM Recruitment team.
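As an illustration of the ETL DB Testing skills listed in the posting above (complex SQL for validation of large datasets), here is a minimal, hypothetical sketch of a source-to-target row-count reconciliation check of the kind such a role typically automates; the connection strings, table names, and dates are assumptions, not details from this posting:

import psycopg2  # common PostgreSQL/Redshift driver; assumed available

# Hypothetical connection details - replace with real credentials/DSNs.
SOURCE_DSN = "host=source-db.example.com dbname=sales user=qa password=secret"
TARGET_DSN = "host=warehouse.example.com port=5439 dbname=dw user=qa password=secret"

def row_count(dsn: str, query: str) -> int:
    """Run a COUNT(*)-style query and return the single integer result."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(query)
        return cur.fetchone()[0]

def test_orders_row_count_matches():
    # Hypothetical source table vs. warehouse staging table for one load date.
    src = row_count(SOURCE_DSN, "SELECT COUNT(*) FROM orders WHERE order_date = '2025-07-01'")
    tgt = row_count(TARGET_DSN, "SELECT COUNT(*) FROM stg_orders WHERE order_date = '2025-07-01'")
    assert src == tgt, f"Row count mismatch: source={src}, target={tgt}"

if __name__ == "__main__":
    test_orders_row_count_matches()
    print("Source and target row counts match.")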

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Software Analysis Associate Advisor (Sr. BI Analyst and Visualization Expert) at Evernorth Health Services, a division of The Cigna Group, you will play a crucial role in creating pharmacy, care, and benefits solutions to improve health and increase vitality for millions of people. Your primary responsibility will be to design, develop, and maintain BI reports using Cognos and other BI tools. You will ensure that BI solutions are optimized for performance and scalability, and develop Materialized Views to support complex data aggregation and reporting requirements. Your role will involve conducting in-depth data analysis to generate business insights and support strategic decision-making. You will identify trends, patterns, and anomalies in data and provide actionable recommendations. Collaborating closely with business stakeholders, you will gather requirements and translate them into BI solutions. Additionally, you will provide training and support to end users on BI tools and reporting capabilities. Ensuring data accuracy and integrity in all BI outputs will be a key focus area for you. You will participate in data quality and governance initiatives to maintain reliable data sources. Staying updated with the latest BI technologies and trends, you will continuously improve BI processes and methodologies. Working with distributed requirements and technical stakeholders, you will complete shared design and development tasks. To excel in this role, you must have extensive experience with BI tools, particularly Cognos, and proficiency in SQL and other data querying languages. Strong data visualization skills and experience with tools like Cognos, Tableau, or Power BI are preferred. You should also have experience creating and managing Materialized Views in data warehouse and data lake environments, a solid understanding of OOP, design patterns, and JSON data structures, as well as familiarity with AWS, Redshift, and CI/CD practices. With a minimum of 8 years of experience and a college degree in a related technical/business area, you will be recognized internally as the go-to person for the most complex software engineering assignments. Your proven experience with the architecture, design, and development of large-scale enterprise application solutions, along with industry certifications in BI or data analysis, will be valuable assets in this role. Evernorth is an Equal Opportunity Employer that actively encourages and supports diversity, equity, and inclusion efforts across the organization. Join us in our mission to make the prediction, prevention, and treatment of illness and disease more accessible to diverse client populations.
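As context for the Materialized View responsibility described above, the sketch below shows one common pattern for creating and refreshing a Redshift materialized view from Python so that BI reports read pre-aggregated data; the cluster endpoint, schema, table, and column names are illustrative assumptions only:

import psycopg2  # Redshift speaks the PostgreSQL wire protocol

# Hypothetical cluster endpoint and credentials - not taken from the posting.
conn = psycopg2.connect(
    host="bi-cluster.example.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="reporting", user="bi_user", password="secret",
)
conn.autocommit = True

create_mv = """
CREATE MATERIALIZED VIEW mv_claims_by_month AS
SELECT date_trunc('month', claim_date) AS claim_month,
       plan_type,
       COUNT(*)      AS claim_count,
       SUM(paid_amt) AS total_paid
FROM claims
GROUP BY 1, 2;
"""

with conn.cursor() as cur:
    cur.execute(create_mv)
    # Re-run after base-table loads so reporting queries hit fresh aggregates.
    cur.execute("REFRESH MATERIALIZED VIEW mv_claims_by_month;")

conn.close()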

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be responsible for developing and modifying programs using Python, AWS Glue/Redshift, and PySpark. Your role will involve writing effective and scalable code and identifying areas for program modification. Additionally, you must have a strong understanding of AWS cloud technologies such as CloudWatch, Lambda, DynamoDB, API Gateway, and S3. Experience in creating APIs from scratch and integrating with third-party APIs is also required. This is a full-time position based in Hyderabad/Chennai/Bangalore, and the ideal candidate should have a maximum notice period of 15 days.
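To illustrate the Lambda and API Gateway skills this role calls out, here is a minimal, hypothetical sketch of a Lambda handler written for an API Gateway proxy integration; the DynamoDB table name, path parameter, and payload shape are assumptions, not part of the posting:

import json
import boto3

# Hypothetical DynamoDB table used for illustration only.
TABLE_NAME = "customer_profiles"
dynamodb = boto3.resource("dynamodb")

def lambda_handler(event, context):
    """Handle GET /customers/{customer_id} from an API Gateway proxy integration."""
    customer_id = (event.get("pathParameters") or {}).get("customer_id")
    if not customer_id:
        return {"statusCode": 400, "body": json.dumps({"error": "customer_id is required"})}

    table = dynamodb.Table(TABLE_NAME)
    item = table.get_item(Key={"customer_id": customer_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item, default=str),
    }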

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

The ideal candidate for this position should have advanced proficiency in Python, with a solid understanding of classes and inheritance. Additionally, the candidate should be well versed in EMR, Athena, Redshift, AWS Glue, IAM roles, CloudFormation (CFT is optional), Apache Airflow, Git, SQL, PySpark, OpenMetadata, and Data Lakehouse architectures. Experience with metadata management is highly desirable, particularly with AWS services such as S3. The candidate should possess the following key skills: creation of ETL pipelines, deploying code in EMR, querying in Athena, creating Airflow DAGs for scheduling ETL pipelines, and knowledge of AWS Lambda with the ability to create Lambda functions. This role is for an individual contributor; as such, the candidate is expected to autonomously manage client communication and proactively resolve technical issues without external assistance.
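Since the key skills above include creating Airflow DAGs to schedule ETL pipelines, the following is a minimal sketch of such a DAG, assuming Airflow 2.4+; the task logic, DAG id, and schedule are placeholders, not requirements from the posting:

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull data from an upstream source (e.g., an S3 prefix or API).
    print("extracting raw data")

def transform_and_load():
    # Placeholder: run a PySpark/EMR step or load curated data into Redshift.
    print("transforming and loading data")

with DAG(
    dag_id="example_daily_etl",        # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
    tags=["example"],
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="transform_and_load", python_callable=transform_and_load)
    extract_task >> load_task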

Posted 3 weeks ago

Apply

3.0 - 8.0 years

0 Lacs

Karnataka

On-site

We are looking for a skilled R Shiny programmer to create interactive reports that transform clinical trial data into actionable clinical insights. As an R Shiny programmer, your role will involve designing, developing, deploying, and optimizing user-friendly web applications for analyzing and visualizing clinical data. Your responsibilities will include designing, developing, testing, and deploying interactive R Shiny web applications. You will collaborate with data scientists, bioinformatics programmers, analysts, and stakeholders to understand application requirements and translate them into intuitive R Shiny applications. Additionally, you will be responsible for translating complex data analysis and visualization tasks into clear and user-friendly interfaces, writing clean and efficient R code, conducting code reviews, and validating R programs. Moreover, you will integrate R Shiny applications with AWS services like AWS Redshift, implement unit tests to ensure quality and performance, benchmark and optimize application performance, and address any data inconsistencies and analytical or reporting problems that may arise. Other duties may be assigned as needed. The ideal candidate should possess a Bachelor's degree in Computer Science, Data Science, or a related field, along with 3 to 8 years of relevant experience. Proven expertise in building R Shiny applications, strong proficiency in R programming (including data manipulation, statistical analysis, and data visualization), experience using SQL, and an understanding of user interface (UI) and user experience (UX) principles are essential. Experience with gathering requirements, using RStudio and version control software, managing programming code, and working with Posit Workbench, Connect, and/or Package Manager is preferred. Candidates should have the ability to manage multiple tasks, work independently and in a team environment, and effectively communicate technical concepts in written and oral formats, as well as experience with R Markdown, continuous integration/continuous delivery (CI/CD) pipelines, and AWS cloud computing services such as Redshift, EC2, S3, and CloudWatch. The required education for this position is a BE/MTech/MCA degree in a computer-related field. A satisfactory background check is mandatory for this role.

Posted 3 weeks ago

Apply

6.0 - 12.0 years

0 Lacs

Karnataka

On-site

Your role as a Supervisor at Koch Global Services India (KGSI) will involve being part of a global team dedicated to creating new solutions and enhancing existing ones for Koch Industries. With over 120,000 employees worldwide, Koch Industries is a privately held organization engaged in manufacturing, trading, and investments. KGSI is being established in India to expand its IT operations and serve as an innovation hub within the IT function. This position offers the chance to join at the inception of KGSI and play a pivotal role in its development over the coming years. You will collaborate closely with international colleagues, providing valuable global exposure to the team. In this role, you will lead a team responsible for developing innovative solutions for KGS and its customers. You will oversee the performance and growth of data engineers at KGSI, ensuring the delivery of application solutions. Collaboration with global counterparts will be essential for enterprise-wide delivery success. Your responsibilities will include mentoring team members, providing feedback, and coaching them for their professional growth. Additionally, you will focus on understanding individual career aspirations, addressing challenges, and facilitating relevant training opportunities. Ensuring compensation aligns with Koch's philosophy and maintaining effective communication with HR will be key aspects of your role. Timely delivery of projects is crucial, and you will be responsible for identifying and addressing delays proactively. By fostering knowledge sharing and best practices within the team, you will contribute to the overall success of KGSI. Staying updated on market trends, talent acquisition, and talent retention strategies will be vital for your role. Your ability to lead by example, communicate effectively, and solve problems collaboratively will be essential in driving team success. To qualify for this role, you should hold a Bachelor's or Master's degree in computer science or information technology with a minimum of 12 years of IT experience, including leadership roles in integration teams. A solid background in data engineering, AWS cloud migration, and team management is required. Strong communication skills, customer focus, and a proactive mindset towards innovation are essential for success in this position. Experience with AWS Lambda, Glue, ETL projects, Python, SQL, and BI tools will be advantageous. Familiarity with manufacturing business processes and exposure to Scrum Master practices would be considered a plus. Join Koch Global Services (KGS) to be part of a dynamic team that creates solutions to support various business functions worldwide. With a global presence in India, Mexico, Poland, and the United States, KGS empowers employees to make a significant impact on a global scale.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Data Engineer (Aerospace/Aviation Background)
Key Responsibilities
• Lead and manage end-to-end data engineering projects, collaborating with cross-functional teams including analytics, product, and engineering.
• Design and maintain scalable ETL/ELT pipelines using Redshift, SQL, and AWS services (e.g., S3, Glue, Lambda).
• Optimize Redshift clusters and SQL queries for performance and cost-efficiency.
• Serve as the domain expert for data modeling, architecture, and warehousing best practices.
• Proactively identify and resolve bottlenecks and data quality issues.
• Mentor junior engineers and enforce coding and architectural standards.
• Own the data lifecycle: from ingestion and transformation to validation and delivery for reporting.
Qualifications
• 5+ years of experience in data engineering or a related field.
• Proven expertise in AWS Redshift, advanced SQL, and modern data pipeline tools.
• Hands-on experience with data lakes, data warehousing, and distributed systems.
• Strong understanding of data governance, security, and performance tuning.
• Demonstrated ability to lead projects independently and drive them to completion.
• Excellent problem-solving, communication, and stakeholder management skills.
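As a purely illustrative example of the ETL/ELT pipeline work described above, the sketch below cleans raw telemetry landed in S3 and writes a partitioned, curated copy back to the lake with PySpark; the bucket names, columns, and paths are assumptions rather than details from the posting:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("telemetry_curation").getOrCreate()

# Hypothetical S3 locations - not taken from the posting.
RAW_PATH = "s3://example-bucket/raw/flight_telemetry/"
CURATED_PATH = "s3://example-bucket/curated/flight_telemetry/"

raw = spark.read.parquet(RAW_PATH)

curated = (
    raw.dropDuplicates(["flight_id", "event_ts"])          # basic data quality step
       .filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))    # partition column for downstream queries
)

(curated.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet(CURATED_PATH))

spark.stop()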

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

Tezo is a new-generation Digital & AI solutions provider with a history of creating remarkable outcomes for our customers. We bring exceptional experiences using cutting-edge analytics, data proficiency, technology, and digital excellence.
Job Overview
The AWS Architect with Data Engineering skills will be responsible for designing, implementing, and managing scalable, robust, and secure cloud infrastructure and data solutions on AWS. This role requires a deep understanding of AWS services, data engineering best practices, and the ability to translate business requirements into effective technical solutions.
Key Responsibilities
Architecture Design: Design and architect scalable, reliable, and secure AWS cloud infrastructure. Develop and maintain architecture diagrams, documentation, and standards.
Data Engineering: Design and implement ETL pipelines using AWS services such as Glue, Lambda, and Step Functions. Build and manage data lakes and data warehouses using AWS services like S3, Redshift, and Athena. Ensure data quality, data governance, and data security across all data platforms.
AWS Services Management: Utilize a wide range of AWS services (EC2, S3, RDS, Lambda, DynamoDB, etc.) to support various workloads and applications. Implement and manage CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy. Monitor and optimize the performance, cost, and security of AWS resources.
Collaboration and Communication: Work closely with cross-functional teams including software developers, data scientists, and business stakeholders. Provide technical guidance and mentorship to team members on best practices in AWS and data engineering.
Security and Compliance: Ensure that all cloud solutions follow security best practices and comply with industry standards and regulations. Implement and manage IAM policies, roles, and access controls.
Innovation and Improvement: Stay up to date with the latest AWS services, features, and best practices. Continuously evaluate and improve existing systems, processes, and architectures. (ref:hirist.tech)
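To ground the "ETL pipelines using Glue" responsibility above, here is a minimal skeleton of an AWS Glue (PySpark) job; the catalog database, table name, and S3 path are hypothetical placeholders, not values from the posting:

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"
)

# Drop obviously bad records before landing them in the curated zone.
orders_clean = orders.filter(lambda row: row["order_id"] is not None)

glue_context.write_dynamic_frame.from_options(
    frame=orders_clean,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)

job.commit()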

Posted 3 weeks ago

Apply

0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Software Engineer – Integration (Cloud)
Skills
To be successful in this role as a cloud-focused Integration "Software Engineer – OSS Platform Engineering", you should possess the following skill sets:
Deep expertise in cloud platforms (AWS, Azure, or GCP), infrastructure design, and cost optimization.
Expertise in containerization and orchestration using Docker and Kubernetes (deployments, service mesh, etc.).
Hands-on expertise with platform engineering and productization (for consumption by other applications as tenants) of open-source monitoring/logging tools (Prometheus, Grafana, ELK, and similar) and cloud-native tooling.
Strong knowledge and demonstrable hands-on experience with middleware technologies (Kafka, API gateways, etc.) and data engineering tools/frameworks such as Apache Spark, Airflow, Flink, and the Hadoop ecosystem.
Some Other Highly Valued Skills Include
Expertise building ELT pipelines and cloud/storage integrations - data lake/warehouse integrations (Redshift, BigQuery, Snowflake, etc.).
Solid understanding of DevOps tooling, GitOps, CI/CD, configuration management, Jenkins, build pipelines, and source control systems.
Working knowledge of cloud infrastructure services: compute, storage, networking, hybrid connectivity, monitoring/logging, security, and IAM.
SRE experience.
Expertise building and defining KPIs (SLIs/SLOs) using open-source tooling such as ELK, Prometheus, and other instrumentation, telemetry, and log analytics.
You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in our Pune office.
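As a small illustration of the open-source monitoring skills mentioned above, here is a hedged sketch of instrumenting a Python service with the Prometheus client so that Prometheus can scrape request metrics; the metric names, port, and simulated workload are arbitrary examples:

import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Example metric names - choose names that match your own conventions.
REQUESTS = Counter("integration_requests_total", "Integration requests processed", ["outcome"])
LATENCY = Histogram("integration_request_seconds", "Request processing latency in seconds")

def handle_message() -> None:
    """Stand-in for real integration work (e.g., consuming a Kafka message)."""
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # simulate processing
    REQUESTS.labels(outcome="success").inc()

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics on port 8000 for Prometheus to scrape
    while True:
        handle_message()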

Posted 3 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Information
Company: Yubi, Date Opened: 07/10/2025, Job Type: Full time, Work Experience: 1-3 years, Industry: Technology, City: Bangalore, State/Province: Karnataka, Country: India, Zip/Postal Code: 560076
About Us
Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India’s debt market to marching towards global corporate markets, and from one product to one holistic product suite with seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles.
Job Description
Data Engineer 2
Position Summary: As a Data Engineer, you will be part of a highly talented Data Engineering team, responsible for developing reusable capabilities and tools to automate various types of data processing pipelines. You will contribute to different stages of data engineering such as data acquisition, ingestion, processing, pipeline monitoring, and data validation. Your contribution will be crucial in keeping the various data ingestion and processing pipelines running successfully and in ensuring that the data points available in the data lake are up to date, valid, and usable.
Technology Experience: 3+ years of experience in data engineering. Comfortable and hands-on with Python programming. Strong experience working with RDBMS and NoSQL systems. Strong experience working on the AWS ecosystem, with hands-on experience with components like Airflow, EMR, Redshift, S3, Athena, and PySpark. Strong experience developing REST APIs with Python using frameworks like Flask and FastAPI. Prior experience working with crawling libraries like BeautifulSoup in Python would be desirable. Proven ability to work with SQL queries, including writing complex queries to retrieve key metrics. Skilled in connecting to, exploring, and understanding upstream data. Experience working with various data lake storage formats and the ability to choose one based on the use case.
Responsibilities: Design and build scalable data pipelines that can handle large volumes of data. Develop ETL/ELT pipelines that extract data from upstream sources and sync it to the data lake in Parquet, Iceberg, or Delta formats. Optimize the data pipelines, ensure they run successfully, and ensure business continuity. Collaborate with cross-functional teams and source all the data required for the business use cases. Stay up to date with emerging data technologies and trends to ensure the continuous improvement of our data infrastructure and architecture. Follow best practices in data querying and manipulation to ensure data integrity.
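Given the emphasis above on building REST APIs with Flask/FastAPI around data platform tooling, below is a minimal, hypothetical FastAPI sketch exposing pipeline status; the model fields, route, and in-memory store are illustrative assumptions rather than details from the posting:

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="pipeline-status-api")  # hypothetical service name

class PipelineStatus(BaseModel):
    pipeline: str
    last_run: str
    ok: bool

# In-memory stand-in for a real metadata store (e.g., a table in the data lake).
_STATUS = {
    "orders_ingest": PipelineStatus(pipeline="orders_ingest", last_run="2025-07-01T02:00:00Z", ok=True),
}

@app.get("/pipelines/{name}", response_model=PipelineStatus)
def get_pipeline(name: str) -> PipelineStatus:
    status = _STATUS.get(name)
    if status is None:
        raise HTTPException(status_code=404, detail="pipeline not found")
    return status

# Run locally with: uvicorn pipeline_status:app --reload  (module name is an assumption)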

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies