Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Fullstack Developer (Java + Angular)
Qualification: 5+ years of experience with a minimum bachelor’s degree in Computer Science.
Notice Period: Immediate
Technical Skillset:
Languages & Frameworks: Java 8+, JavaScript, TypeScript; Spring Boot, Spring MVC, Spring Web Services, Spring Data, Hibernate; Angular 8+, React 16+
Frontend Technologies: Angular Material, Bootstrap 4, HTML5, CSS3, SCSS
Database & Reporting: Oracle SQL and PL/SQL development; JasperReports
ETL & Scripting: Pentaho Kettle; basic Linux scripting and troubleshooting
Version Control: Git
Software Design: Design patterns
Posted 4 days ago
10.0 - 12.0 years
8 - 10 Lacs
Hyderābād
On-site
We are the leading provider of professional services to the middle market globally. Our purpose is to instill confidence in a world of change, empowering our clients and people to realize their full potential. Our exceptional people are the key to our unrivaled, inclusive culture and talent experience and our ability to be compelling to our clients. You’ll find an environment that inspires and empowers you to thrive both personally and professionally. There’s no one like you and that’s why there’s nowhere like RSM.
Bachelor's degree in Computer Science, Information Technology, or a related field. Familiarity with cloud databases and full-stack technologies (such as .NET and C#). 10-12 years of extensive experience in SQL Server database administration, data modeling, and management and integration services: data migration, complex stored procedures and performant database functions, ETL (Extract, Transform, Load), and integration services, including SSIS. Data analysis, data quality, and proficient data modeling. Test automation tools and techniques for SQL databases. Demonstrated experience in applying DevSecOps best practices within an Agile software development environment. Collaborate effectively with cross-functional teams in an Agile environment, utilizing tools like Jira, Confluence, and Gliffy for documentation and workflows. Design logical and physical data models to support application and reporting needs. Collaborate with business analysts and developers to understand data requirements. Create and maintain conceptual, logical, and physical data models. Evaluate and recommend improvements to data architecture and flows. Monitor database performance and implement tuning measures for optimization. Expertise in database platforms such as Microsoft SQL Server, Oracle DB, PostgreSQL, MySQL, and cloud databases.
At RSM, we offer a competitive benefits and compensation package for all our people. We offer flexibility in your schedule, empowering you to balance life’s demands, while also maintaining your ability to serve clients. Learn more about our total rewards at https://rsmus.com/careers/india.html.
RSM does not tolerate discrimination and/or harassment based on race; colour; creed; sincerely held religious beliefs, practices or observances; sex (including pregnancy or disabilities related to nursing); gender (including gender identity and/or gender expression); sexual orientation; HIV status; national origin; ancestry; familial or marital status; age; physical or mental disability; citizenship; political affiliation; medical condition (including family and medical leave); domestic violence victim status; past, current or prospective service in the Indian Armed Forces; Indian Armed Forces Veterans and Indian Armed Forces Personnel status; pre-disposing genetic characteristics; or any other characteristic protected under applicable provincial employment legislation. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process and/or employment/partnership. RSM is committed to providing equal opportunity and reasonable accommodation for people with disabilities. If you require a reasonable accommodation to complete an application, interview, or otherwise participate in the recruiting process, please send us an email at careers@rsmus.com.
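For a concrete sense of the performance-monitoring work this posting describes, a minimal Python sketch follows; it flags fragmented SQL Server indexes via a standard DMV. The connection string, database, and the 30% threshold are illustrative assumptions, not details from the posting.

```python
# Hypothetical sketch: list fragmented indexes on a SQL Server database.
# Assumes pyodbc is installed and the connection string points at a real instance.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=AppDb;Trusted_Connection=yes"  # assumed values
)

QUERY = """
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30  -- arbitrary example threshold
ORDER BY ips.avg_fragmentation_in_percent DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    for table, index, frag in conn.cursor().execute(QUERY):
        # Above roughly 30% fragmentation, a REBUILD is commonly
        # preferred to a REORGANIZE.
        print(f"{table}.{index}: {frag:.1f}% fragmented")
```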
Posted 4 days ago
40.0 years
0 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-213468 ADDITIONAL LOCATIONS: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Jun. 12, 2025 CATEGORY: Information Systems
ABOUT AMGEN Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.
ABOUT THE ROLE Role Description: We are seeking an MDM Associate Data Engineer with 2-5 years of experience to support and enhance our enterprise MDM (Master Data Management) platforms using Informatica/Reltio. This role is critical in delivering high-quality master data solutions across the organization, utilizing modern tools like Databricks and AWS to drive insights and ensure data reliability. The ideal candidate will have strong SQL and data profiling skills, and experience working with cross-functional teams in a pharma environment. To succeed in this role, the candidate must have strong data engineering experience along with MDM knowledge; candidates with only MDM experience are not eligible. The candidate must have data engineering experience with technologies such as SQL, Python, PySpark, Databricks, and AWS, along with knowledge of MDM (Master Data Management).
Roles & Responsibilities: Analyze and manage customer master data using Reltio or Informatica MDM solutions. Perform advanced SQL queries and data analysis to validate and ensure master data integrity. Leverage Python, PySpark, and Databricks for scalable data processing and automation. Collaborate with business and data engineering teams for continuous improvement in MDM solutions. Implement data stewardship processes and workflows, including approval and DCR mechanisms. Utilize AWS cloud services for data storage and compute processes related to MDM. Contribute to metadata and data modeling activities. Track and manage data issues using tools such as JIRA and document processes in Confluence. Apply Life Sciences/Pharma industry context to ensure data standards and compliance.
Basic Qualifications and Experience: Master’s degree with 1-3 years of experience in Business, Engineering, IT or related field OR Bachelor’s degree with 2-5 years of experience in Business, Engineering, IT or related field OR Diploma with 6-8 years of experience in Business, Engineering, IT or related field.
Functional Skills: Must-Have Skills: Advanced SQL expertise and data wrangling. Strong experience in Python and PySpark for data transformation workflows. Strong experience with Databricks and AWS architecture. Knowledge of MDM, data governance, stewardship, and profiling practices. In addition to the above, candidates with experience on Informatica or Reltio MDM platforms will be preferred.
Good-to-Have Skills: Experience with IDQ, data modeling and approval workflow/DCR. Background in Life Sciences/Pharma industries. Familiarity with project tools like JIRA and Confluence. Strong grip on data engineering concepts.
Professional Certifications: Any ETL certification (e.g. Informatica). Any data analysis certification (SQL, Python, Databricks). Any cloud certification (AWS or Azure).
Soft Skills: Strong analytical abilities to assess and improve master data processes and solutions.
Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders. Effective problem-solving skills to address data-related issues and implement scalable solutions. Ability to work effectively with global, virtual teams.
EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
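To illustrate the PySpark-plus-MDM combination the posting asks for, here is a minimal sketch of a survivorship ("golden record") step: keep the most recently updated record per match key. The table and column names are assumptions made for the example, not Amgen's actual schema.

```python
# Hypothetical sketch of an MDM-style survivorship step in PySpark:
# keep the most recently updated record per customer match key.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("mdm-survivorship").getOrCreate()

# Assumed source table and column names, purely for illustration.
raw = spark.table("staging.customer_records")

w = Window.partitionBy(F.upper(F.trim("customer_name")), "country") \
          .orderBy(F.col("updated_at").desc())

golden = (raw
          .withColumn("rn", F.row_number().over(w))
          .filter("rn = 1")          # survivor = newest record per match key
          .drop("rn"))

golden.write.mode("overwrite").saveAsTable("mdm.customer_golden")
```

In a real MDM platform the match key would come from the Reltio/Informatica match engine rather than a simple name normalization; the window-and-rank pattern is the part that carries over.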
Posted 4 days ago
2.0 - 4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
MongoDB’s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. We enable organizations of all sizes to easily build, scale, and run modern applications by helping them modernize legacy workloads, embrace innovation, and unleash AI. Our industry-leading developer data platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available in more than 115 regions across AWS, Google Cloud, and Microsoft Azure. Atlas allows customers to build and run applications anywhere—on premises, or across cloud providers. With offices worldwide and over 175,000 new developers signing up to use MongoDB every month, it’s no wonder that leading organizations, like Samsung and Toyota, trust MongoDB to build next-generation, AI-powered applications. Summary As an Analytics Engineer at MongoDB, you will play a critical role in leveraging data to drive informed decision-making and simplify end user engagement across our most critical data sets. You will be responsible for designing, developing, and maintaining robust analytics solutions, ensuring data integrity, and enabling data-driven insights across all of MongoDB. This role requires an analytical thinker with strong technical expertise to contribute to the growth and success of the entire business. We are looking to speak to candidates who are based in Gurugram for our hybrid working model. Responsibilities Design, implement, and maintain highly performant data post-processing pipelines Create shared data assets that will act as the company’s source-of-truth for critical business metrics Partner with analytics stakeholders to curate analysis-ready datasets and augment the generation of actionable insights Partner with data engineering to expose governed datasets to the rest of the organization Make impactful contributions to our analytics infrastructure, systems, and tools Create and manage documentation, and conduct knowledge sharing sessions to proliferate tribal knowledge and best practices Maintain consistent planning and tracking of work in JIRA tickets Skills & Attributes Bachelor’s degree (or equivalent) in mathematics, computer science, information technology, engineering, or related discipline 2-4 years of relevant experience Strong Proficiency in SQL and experience working with relational databases Solid understanding of data modeling and ETL processes Proficiency in Python for data manipulation and analysis Familiarity with CI/CD concepts and experience with managing codebases with git Experience managing ETL and data pipeline orchestration with dbt and Airflow Familiarity with basic command line functions Experience translating project requirements into a set of technical sub-tasks that build towards a final deliverable Committed to continuous improvement, with a passion for building processes/tools to make everyone more efficient The ability to effectively collaborate cross-functionally to drive actionable and measurable results A passion for AI as an enhancing tool to improve workflows, increase productivity, and generate smarter outcomes Strong communication skills to document technical processes clearly and lead knowledge-sharing efforts across teams A desire to constantly learn and improve themselves To drive the personal growth and business impact of our employees, we’re committed to developing a supportive and enriching culture for everyone. 
From employee affinity groups, to fertility assistance and a generous parental leave policy, we value our employees’ wellbeing and want to support them along every step of their professional and personal journeys. Learn more about what it’s like to work at MongoDB, and help us make an impact on the world! MongoDB is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request an accommodation due to a disability, please inform your recruiter. MongoDB is an equal opportunities employer. Req ID - 2263168254
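The posting's "ETL and data pipeline orchestration with dbt and Airflow" maps to a very common pattern, sketched below with Airflow 2.x. The project path, DAG id, and schedule are assumptions for illustration.

```python
# Hypothetical sketch: orchestrating dbt with Airflow (2.x assumed).
# The dbt project path and schedule are placeholder values.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_models",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/analytics/dbt_project && dbt run",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/analytics/dbt_project && dbt test",
    )
    dbt_run >> dbt_test  # only test models after they build successfully
```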
Posted 4 days ago
0 years
4 - 7 Lacs
Hyderābād
On-site
Req ID: 327063 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Python/PySpark/Apache Spark developer to join our team in Hyderabad, Telangana (IN-TG), India (IN).
At NTT DATA, we know that with the right people on board, anything is possible. The quality, integrity, and commitment of our employees are key factors in our company's growth, market presence and our ability to help our clients stay a step ahead of the competition. By hiring the best people and helping them grow both professionally and personally, we ensure a bright future for NTT DATA and for the people who work here.
NTT DATA Services currently seeks a Python Developer to join our team in Hyderabad to design and build ETL solutions, with experience in data engineering and data modelling at large scale in both batch and real-time environments.
Skills required: Python, PySpark, Apache Spark, Unix shell scripting, GCP, BigQuery, MongoDB, Kafka event streaming, API development, CI/CD.
For Software Engineering 3: 6+ years. Mandatory: Apache Spark with Python, PySpark, GCP with BigQuery, databases. Secondary mandate: Ab Initio ETL. Good to have: Unix shell scripting and Kafka event streaming.
About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com
NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
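The mandatory combination here (Apache Spark with Python plus GCP BigQuery) typically looks like the sketch below, which uses the spark-bigquery connector. The project, dataset, table, and bucket names are assumptions, and the connector jar is assumed to be on the Spark classpath.

```python
# Hypothetical sketch: a PySpark batch job reading from and writing to
# BigQuery via the spark-bigquery connector (assumed to be installed).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bq-batch-etl").getOrCreate()

orders = (spark.read.format("bigquery")
          .option("table", "my-project.sales.orders")   # placeholder table
          .load())

daily = (orders
         .groupBy(F.to_date("created_at").alias("order_date"))
         .agg(F.sum("amount").alias("revenue")))

(daily.write.format("bigquery")
      .option("table", "my-project.reporting.daily_revenue")
      .option("temporaryGcsBucket", "my-etl-staging")    # required for writes
      .mode("overwrite")
      .save())
```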
Posted 4 days ago
3.0 - 5.0 years
2 - 7 Lacs
Hyderābād
On-site
Country/Region: IN Requisition ID: 26436 Work Model: Position Type: Salary Range: Location: INDIA - HYDERABAD - BIRLASOFT OFFICE Title: Technical Lead-Data Engg
Description: Area(s) of responsibility We are seeking a skilled Informatica ETL Developer with 3-5 years of experience in ETL and Business Intelligence projects. The ideal candidate will have a strong background in Informatica PowerCenter, a solid understanding of data warehousing concepts, and hands-on experience in SQL, performance tuning, and production support. This role involves designing and maintaining robust ETL pipelines to support digital transformation initiatives for clients in the manufacturing, automotive, transportation, and engineering domains.
Key Responsibilities: Design, develop, and maintain ETL workflows using Informatica PowerCenter. Troubleshoot and optimize ETL jobs for performance and reliability. Analyze complex data sets and write advanced SQL queries for data validation and transformation. Collaborate with data architects and business analysts to implement data warehousing solutions. Apply SDLC methodologies throughout the ETL development lifecycle. Support production environments by identifying and resolving data and pipeline issues.
Posted 4 days ago
2.0 years
5 - 8 Lacs
Hyderābād
On-site
At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com. This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals please.
Job Title: Python with SQL Position: Software Engineer Experience: 2-3 years Category: Software Development/Engineering Location: Hyderabad, Chennai, Bangalore Employment Type: Full Time
Your future duties and responsibilities: We are seeking a highly skilled and detail-oriented Python and SQL Developer to join our team. The ideal candidate will be responsible for developing and maintaining data-driven applications, building efficient and scalable solutions, and working with databases to extract and manipulate data for analysis and reporting. Design, develop, and maintain scalable Python applications and microservices. Write complex and optimized SQL queries for data extraction, transformation, and loading (ETL). Develop and automate data pipelines integrating various data sources (REST APIs, files, databases). Work with large datasets in relational databases such as PostgreSQL, MySQL, or SQL Server. Collaborate with data engineers, analysts, and product teams to build high-quality data solutions. Implement unit testing, logging, and error handling to ensure software reliability. Optimize database performance and troubleshoot query issues. Participate in architecture discussions and code reviews.
2+ years of professional experience with Python and SQL in production environments. Deep understanding of Python core concepts including data structures, OOP, exception handling, and multi-threading. Experience with SQL query optimization, stored procedures, indexing, and partitioning. Strong experience with Python libraries such as Pandas, NumPy, SQLAlchemy, PySpark, or similar. Familiarity with ETL pipelines, data validation, and data integration. Experience with Git, CI/CD tools, and development best practices. Excellent problem-solving skills and ability to debug complex systems.
Required qualifications to be successful in this role: Experience with cloud platforms (AWS RDS, GCP BigQuery, Azure SQL, etc.). Exposure to Docker, Kubernetes, or serverless architectures. Understanding of data warehousing and business intelligence concepts. Prior experience working in Agile/Scrum environments.
Years of experience: 2+ Relevant experience: 2+ Locations: Hyderabad, Bangalore, Chennai. Education: BTech, MTech, BSc. Notice: Immediate to 30 days, or serving notice period.
Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
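The core loop this role describes (pull data with SQL, transform in Python, load it back, with logging and error handling) might look like the minimal sketch below. The connection URL and table names are assumptions for illustration.

```python
# Hypothetical sketch: a small Python + SQL ETL step with logging and
# error handling. The database URL and table names are placeholders.
import logging

import pandas as pd
from sqlalchemy import create_engine

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

engine = create_engine("postgresql+psycopg2://etl:secret@localhost/appdb")

def run() -> None:
    try:
        df = pd.read_sql("SELECT id, amount, created_at FROM raw_orders", engine)
        df["order_date"] = pd.to_datetime(df["created_at"]).dt.date
        daily = df.groupby("order_date", as_index=False)["amount"].sum()
        daily.to_sql("daily_orders", engine, if_exists="replace", index=False)
        log.info("Loaded %d daily rows", len(daily))
    except Exception:
        log.exception("ETL run failed")  # log the full traceback, then re-raise
        raise

if __name__ == "__main__":
    run()
```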
Posted 4 days ago
0 years
3 - 7 Lacs
Hyderābād
On-site
Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Principal Consultant - Power BI Developer!
Responsibilities:
• Working within a team to identify, design and implement a reporting/dashboarding user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices
• Gathering query data from tables of an industry cognitive model/data lake and building data models with BI tools
• Applying requisite business logic using data transformation and DAX
• Understanding of Power BI data modelling and various in-built functions
• Knowledge of report sharing through Workspace/App, access management, dataset scheduling and Enterprise Gateway
• Understanding of static and dynamic row-level security
• Ability to create wireframes based on user stories and business requirements
• Basic understanding of ETL and data warehousing concepts
• Conceptualizing and developing industry-specific insights in the form of dashboards/reports/analytical web applications to deliver pilots/solutions following best practices
Qualifications we seek in you! Minimum Qualifications: Graduate
Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation. Make an impact – Drive change for global enterprises and solve business challenges that matter. Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities. Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture – Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation.
Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Job: Principal Consultant. Primary Location: India-Hyderabad. Schedule: Full-time. Education Level: Bachelor's / Graduation / Equivalent. Job Posting: Jun 13, 2025, 5:48:11 AM. Unposting Date: Ongoing. Master Skills List: Digital. Job Category: Full Time.
Posted 4 days ago
6.0 years
0 Lacs
India
On-site
JOB DESCRIPTION Key Responsibilities: Prior experience migrating from IBM DataStage to DBT and BigQuery, or with similar migrations of data workloads to cloud solutions. Design and implement modular, testable, and scalable DBT models aligned with business logic and performance needs. Optimize and manage BigQuery datasets, partitioning, clustering, and cost-efficient querying. Collaborate with stakeholders to understand existing pipelines and translate them into modern ELT workflows. Establish best practices for version control, CI/CD, testing, and documentation in DBT. Provide technical leadership and mentorship to team members during the migration process. Ensure high standards of data quality, governance, and security.
Required Qualifications: 6+ years of experience in data engineering, with at least 3 years hands-on with DBT and BigQuery. Strong understanding of SQL, data warehousing, and ELT architecture. Experience with data modeling (especially dimensional modeling) and performance tuning in BigQuery. Familiarity with legacy ETL tools like IBM DataStage and the ability to reverse-engineer existing pipelines. Proficiency in Git, CI/CD pipelines, and DataOps practices. Excellent communication skills and ability to work independently and collaboratively.
Preferred Qualifications: Experience in cloud migration projects (especially GCP). Knowledge of data governance, access control, and cost optimization in BigQuery. Exposure to orchestration tools like Airflow. Familiarity with Agile methodologies and cross-functional team collaboration.
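The partitioning-and-clustering techniques this posting names are the standard levers for BigQuery cost control; a minimal sketch using the official Python client follows. The project, dataset, table, and field names are assumptions.

```python
# Hypothetical sketch: create a day-partitioned, clustered BigQuery table.
# Assumes google-cloud-bigquery is installed and default credentials work.
from google.cloud import bigquery

client = bigquery.Client()

table = bigquery.Table(
    "my-project.analytics.events",          # placeholder table id
    schema=[
        bigquery.SchemaField("event_ts", "TIMESTAMP"),
        bigquery.SchemaField("user_id", "STRING"),
        bigquery.SchemaField("event_name", "STRING"),
    ],
)
# Partition by day so queries filtering on event_ts scan only the
# partitions they need; cluster to co-locate rows by user and event.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="event_ts"
)
table.clustering_fields = ["user_id", "event_name"]

client.create_table(table, exists_ok=True)
```

In a dbt-centric workflow the same settings would usually live in the model config (partition_by/cluster_by) rather than be created directly, but the underlying table layout is identical.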
Posted 4 days ago
3.0 years
16 - 20 Lacs
Hyderābād
On-site
Experience level: 3+ years. Location: Hyderabad, Bangalore, Gurgaon.
AWS Cloud Services: Proficient in configuring and managing AWS S3 (Simple Storage Service), Lambda (serverless compute), VPC (Virtual Private Cloud), Route 53 (DNS management), Load Balancing, and CloudWatch (monitoring and logging). Experience working with DynamoDB (NoSQL database); familiarity with AWS ETL (Extract, Transform, Load) services for data processing and migration is preferred.
DevOps Practices: Designs and implements Continuous Integration and Continuous Deployment (CI/CD) pipelines using Bamboo, with a preference for Infrastructure as Code (IaC) methodologies to automate application deployment and infrastructure provisioning. Manages and configures network components, including F5 load balancers and Akamai for content delivery and security. Oversees the management and renewal of digital certificates using Venafi to ensure secure communication and compliance.
Job Type: Full-time Pay: ₹1,600,000.00 - ₹2,000,000.00 per year Benefits: Health insurance, Provident Fund Schedule: Day shift Application Question(s): Are you serving the notice period / an immediate joiner? Experience: AWS Cloud services: 3 years (Preferred); DevOps: 3 years (Required); DynamoDB: 3 years (Preferred) Work Location: In person
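As a rough illustration of working across the AWS services listed above, the sketch below touches S3, DynamoDB, and CloudWatch with boto3. The bucket, table, and metric names are assumptions, and the DynamoDB table is assumed to have "artifact" as its partition key.

```python
# Hypothetical sketch touching three of the AWS services named above.
# All resource names are placeholders; credentials come from the environment.
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
cloudwatch = boto3.client("cloudwatch")

# Stage an artifact in S3.
s3.upload_file("build/report.csv", "my-deploy-bucket", "reports/report.csv")

# Record its metadata in DynamoDB (table assumed to key on "artifact").
dynamodb.Table("deploy_audit").put_item(
    Item={"artifact": "reports/report.csv", "status": "uploaded"}
)

# Emit a custom CloudWatch metric so dashboards and alarms can track it.
cloudwatch.put_metric_data(
    Namespace="Deployments",
    MetricData=[{"MetricName": "ArtifactsUploaded", "Value": 1, "Unit": "Count"}],
)
```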
Posted 4 days ago
15.0 years
0 Lacs
Hyderābād
On-site
Project Role: Data Engineer. Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must-have skills: Databricks Unified Data Analytics Platform. Good-to-have skills: NA. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full time education.
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.
Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior team members to enhance their skills and knowledge in data engineering. - Continuously evaluate and improve data processes to enhance efficiency and effectiveness.
Professional & Technical Skills: - Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Experience with data pipeline development and management. - Strong understanding of ETL processes and data integration techniques. - Familiarity with data quality frameworks and best practices. - Knowledge of cloud data storage solutions and architectures.
Additional Information: - The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform. - This position is based in Hyderabad. - A 15-year full-time education is required.
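A typical Databricks ETL step of the kind this role describes (ingest raw files, apply a quality gate, land a Delta table) might look like the sketch below. The mount points and column names are assumptions, and the Databricks runtime is assumed to provide the Spark session and Delta support.

```python
# Hypothetical sketch of a Databricks-style ETL step: ingest raw JSON,
# apply a basic data-quality filter, and append to a Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

raw = spark.read.format("json").load("/mnt/landing/orders/")  # placeholder path

clean = (raw
         .filter(F.col("order_id").isNotNull())       # basic data-quality gate
         .withColumn("ingest_date", F.current_date()))

(clean.write.format("delta")
      .mode("append")
      .partitionBy("ingest_date")
      .save("/mnt/curated/orders/"))
```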
Posted 4 days ago
2.0 - 3.0 years
0 - 0 Lacs
Cochin
On-site
About Us: G3 Interactive is a Kochi-based software development company now expanding into AI, data engineering, and advanced analytics due to growing customer demand. We’re seeking a Data Engineer / Data Scientist with a strong foundation in data processing and analytics — and experience with Databricks is a significant advantage.
What You’ll Do: Build and manage scalable data pipelines and ELT workflows using modern tools. Design and implement predictive models using Python (pandas, scikit-learn). Leverage platforms like Databricks for distributed processing and ML workflows. Collaborate with internal and client teams to deliver data-driven solutions. Create insightful dashboards using Metabase or other BI tools. Maintain high-quality standards in data management and governance.
Must-Have Skills: 2–3 years’ experience in Data Science, Data Engineering, or a similar role. Strong Python skills with expertise in pandas and data structures. Experience writing efficient SQL queries. Understanding of ML models, statistical analysis, and data wrangling. Excellent communication and collaboration skills.
Bonus / Advantage: Experience with Databricks (ETL, notebooks, ML pipelines, Delta Lake, etc.). Familiarity with Spark, Airflow, AWS, or Google Cloud. Knowledge of CI/CD for data workflows.
Job Types: Full-time, Permanent. Pay: ₹25,000.00 - ₹35,000.00 per month. Schedule: Day shift. Education: Master's (Preferred). Experience: data engineering: 2 years (Preferred). Work Location: In person
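The predictive-modeling work described above, on the pandas + scikit-learn stack the posting names, reduces to a loop like the sketch below. The CSV file, feature columns, and target are assumptions for illustration.

```python
# Hypothetical sketch: train and evaluate a simple churn classifier with
# pandas + scikit-learn. Dataset and column names are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")                     # placeholder dataset
X = df[["tenure_months", "monthly_spend", "support_tickets"]]
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print(f"Holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```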
Posted 4 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description: Join the Ford HR Management Security & Controls department! Our team is dedicated to ensuring robust and secure user access management for Human Resource (HR) applications in our global environment. We are responsible for the tools and processes that allow HR staff, Ford employees, and suppliers to request and authorize access efficiently and securely. We also maintain critical interfaces that connect our access management systems to various downstream applications. A key focus area for our team is the configuration and management of security roles within our global HR system, Oracle HCM. Oracle HCM (Human Capital Management) is Ford's comprehensive global HR platform. This includes core HR processes (like employee data management, promotions, and internal transfers), as well as Compensation, Learning & Development, Talent Management, Recruiting and Payroll. We are looking for a skilled and experienced IT Analyst/Specialist with deep knowledge of Oracle HCM, particularly its security and access management capabilities. This role is critical to ensuring the integrity and security of our HR data and systems. You will also leverage your skills in SQL and Informatica PowerCenter to support data analysis, reporting, and ETL processes vital to our operations. You'll be joining a dynamic, globally distributed IT team with members located in the US, India, and Germany, collaborating across regions to achieve our shared goals.
Responsibilities: Configure, manage, and maintain security roles, profiles, and permissions within the global Oracle HCM system, ensuring compliance with security policies. Design, develop, and maintain Extract, Transform, Load (ETL) processes using Informatica PowerCenter to move and integrate data from various sources. Utilize SQL for data extraction, analysis and validation. Collaborate closely with HR functional teams and other IT teams to understand security and data requirements. Ensure implemented solutions adhere to security best practices and internal controls.
Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience. 3+ years of experience with Oracle HCM, with a strong focus on security configuration and user access management. 3+ years of experience with SQL for data querying, analysis, and manipulation. Hands-on experience designing, developing, and maintaining ETL processes (e.g. using Informatica IICS). Understanding of data security principles and best practices, especially in an HR context. Experience troubleshooting complex technical issues related to access, security, or data integration. Strong analytical and problem-solving skills. Excellent communication and collaboration skills, comfortable working with global teams across different time zones.
Desired Skills: Experience with other Oracle HCM security modules. Experience with other Oracle technologies or modules within HCM (e.g., Oracle BI Publisher). Experience working in a large, global enterprise environment.
Posted 4 days ago
5.0 years
8 - 9 Lacs
Gurgaon
On-site
You Lead the Way. We’ve Got Your Back. At American Express, we know that with the right backing, people and businesses have the power to progress in incredible ways. Whether we’re supporting our customers’ financial confidence to move ahead, taking commerce to new heights, or encouraging people to explore the world, our colleagues are constantly redefining what’s possible — and we’re proud to back each other every step of the way. When you join #TeamAmex, you become part of a diverse community of over 60,000 colleagues, all with a common goal to deliver an exceptional customer experience every day. We back our colleagues with the support they need to thrive, professionally and personally. That’s why we have Amex Flex, our enterprise working model that provides greater flexibility to colleagues while ensuring we preserve the important aspects of our unique in-person culture. We are building an energetic, high-performance team with a nimble and creative mindset to drive our technology and products. American Express (AXP) is a powerful brand, a great place to work and has unparalleled scale. Join us for an exciting opportunity in Marketing Technology within American Express Technologies.
How will you make an impact in this role? There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing: As a part of our team, you will be developing innovative, high-quality, and robust operational engineering capabilities. Develop software in our technology stack, which is constantly evolving but currently includes Big Data, Spark, Python, Scala, GCP, and the Adobe suite (like Customer Journey Analytics). Work with business partners and stakeholders to understand functional requirements, architecture dependencies, and business capability roadmaps. Create technical solution designs to meet business requirements. Define best practices to be followed by the team. Take your place as a core member of an Agile team driving the latest development practices. Identify and drive reengineering opportunities, and opportunities for adopting new technologies and methods. Suggest and recommend solution architecture to resolve business problems. Perform peer code review and participate in technical discussions with the team on the best solutions possible. As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers' digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. American Express offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology of #TeamAmex.
Minimum Qualifications: BS or MS degree in computer science, computer engineering, or other technical discipline, or equivalent work experience. 5+ years of hands-on software development experience with Big Data & Analytics solutions – Hadoop, Hive, Spark, Scala, Python, shell scripting, GCP Cloud BigQuery, Bigtable, Airflow. Working knowledge of the Adobe suite, like Adobe Experience Platform, Adobe Customer Journey Analytics, and CDP.
Proficiency in SQL and database systems, with experience in designing and optimizing data models for performance and scalability. Design and development experience with Kafka, real-time ETL pipelines, and APIs is desirable. Experience in designing, developing, and optimizing data pipelines for large-scale data processing, transformation, and analysis using Big Data and GCP technologies. Certification in a cloud platform (GCP Professional Data Engineer) is a plus. Understanding of distributed (multi-tiered) systems, data structures, algorithms and design patterns. Strong object-oriented programming skills and design patterns. Experience with CI/CD pipelines, automated test frameworks, and source code management tools (XLR, Jenkins, Git, Maven). Good knowledge of and experience with configuration management tools like GitHub. Ability to analyze complex data engineering problems, propose effective solutions, and implement them effectively. Looks proactively beyond the obvious for continuous improvement opportunities. Communicates effectively with product and cross-functional teams.
We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries. Bonus incentives. Support for financial well-being and retirement. Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location). Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need. Generous paid parental leave policies (depending on your location). Free access to global on-site wellness centers staffed with nurses and doctors (depending on location). Free and confidential counseling support through our Healthy Minds program. Career development and training opportunities.
American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
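The "Kafka, real-time ETL pipeline" experience this posting calls desirable commonly takes the shape of Spark Structured Streaming reading from a Kafka topic, sketched below. The broker address, topic, schema, and output paths are assumptions, and the Spark-Kafka integration package is assumed to be available.

```python
# Hypothetical sketch: a real-time ETL pipeline with Spark Structured
# Streaming reading from Kafka. All names and paths are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream-etl").getOrCreate()

schema = StructType([
    StructField("card_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "transactions")
          .load()
          # Kafka delivers bytes; decode and parse the JSON payload.
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "/data/streams/transactions/")
         .option("checkpointLocation", "/data/checkpoints/transactions/")
         .start())
query.awaitTermination()
```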
Posted 4 days ago
16.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
EY GDS – Data and Analytics (D&A) – Principal Cloud Architect. As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.
The opportunity: We’re looking for Senior Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a part of a growing Data and Analytics team.
Your Key Responsibilities: Have proven experience in driving Analytics GTM/pre-sales by collaborating with senior stakeholders in the client and partner organization in BCM, WAM, Insurance. Activities will include pipeline building, RFP responses, creating new solutions and offerings, conducting workshops, as well as managing in-flight projects focused on cloud and big data. Work with clients to convert business problems/challenges into technical solutions considering security, performance, scalability etc. (16-20 years). Understand current- and future-state enterprise architecture. Contribute to various technical streams during implementation of the project. Provide product- and design-level technical best practices. Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop and deliver technology solutions. Define and develop client-specific best practices around data management within a Hadoop or cloud environment. Recommend design alternatives for data ingestion, processing and provisioning layers. Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop and Spark. Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming and related technologies.
Skills and Attributes for Success: Architect in designing highly scalable solutions on Azure, AWS and GCP. Strong understanding of and familiarity with all Azure/AWS/GCP/Big Data ecosystem components. Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms. Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming. Hands-on experience with major components like cloud ETLs, Spark, Databricks. Experience working with NoSQL in at least one of the data stores - HBase, Cassandra, MongoDB. Knowledge of Spark and Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions. Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Cloudera and Databricks. Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms. Good knowledge of Apache Kafka and Apache Flume. Experience in enterprise-grade solution implementations. Experience in performance benchmarking enterprise applications. Experience in data security (on the move, at rest). Strong UNIX operating system concepts and shell scripting knowledge.
To qualify for the role, you must have: Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution. Excellent communicator (written and verbal, formal and informal). Ability to multi-task under pressure and work independently with minimal supervision. Strong verbal and written communication skills. Must be a team player and enjoy working in a cooperative and collaborative team environment. Adaptable to new technologies and standards. Participate in all aspects of the Big Data solution delivery life cycle including analysis, design, development, testing, production deployment, and support. Responsible for the evaluation of technical risks and mapping out mitigation strategies. Working knowledge of any of the cloud platforms: AWS, Azure or GCP. Excellent business communication, consulting, and quality process skills. Excellence in leading solution architecture, design, build and execution for leading clients in the Banking, Wealth & Asset Management, or Insurance domain. Minimum 7 years hands-on experience in one or more of the above areas. Minimum 10 years industry experience.
Ideally, you’ll also have: Strong project management skills. Client management skills. Solutioning skills. The innate quality to become the go-to person for any marketing, pre-sales and solution accelerator within the practice.
What We Look For: People with technical experience and enthusiasm to learn new things in this fast-moving environment.
What Working At EY Offers: At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around. Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that’s right for you.
EY | Building a better working world. EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
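The batch-ingestion pattern this posting describes (Sqoop-style pulls from a relational source) is often done today with Spark's JDBC reader; a minimal sketch follows. The JDBC URL, credentials, table, and partitioning bounds are all assumptions for illustration.

```python
# Hypothetical sketch: Sqoop-style batch ingestion from a relational
# source via Spark JDBC. All connection details are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc-batch-ingest").getOrCreate()

accounts = (spark.read.format("jdbc")
            .option("url", "jdbc:postgresql://source-db:5432/core")
            .option("dbtable", "public.accounts")
            .option("user", "ingest")
            .option("password", "secret")
            # Parallelize the pull across 8 partitions by primary-key range.
            .option("partitionColumn", "account_id")
            .option("lowerBound", "1")
            .option("upperBound", "10000000")
            .option("numPartitions", "8")
            .load())

accounts.write.mode("overwrite").parquet("/lake/raw/accounts/")
```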
Posted 4 days ago
1.0 years
2 - 6 Lacs
Gurgaon
On-site
We are the leading provider of professional services to the middle market globally. Our purpose is to instill confidence in a world of change, empowering our clients and people to realize their full potential. Our exceptional people are the key to our unrivaled, inclusive culture and talent experience and our ability to be compelling to our clients. You’ll find an environment that inspires and empowers you to thrive both personally and professionally. There’s no one like you and that’s why there’s nowhere like RSM.
As a Data Operations Associate, you will support data quality and operational efficiency within our team. Your responsibilities will include working on assigned data-related tasks, assisting with process improvements, and maintaining accurate documentation. You will use tools such as CRM Dynamics and Alteryx to complete assigned tasks and contribute to data cleanup efforts. This role offers an excellent opportunity to develop expertise in data operations and make a meaningful impact on data integrity and efficiency.
Essential Duties (required duties employees must accomplish, against which performance is measured): Work on assigned tickets in personal Workfront queue, ensuring timely completion and clear communication – 50%. Support data cleanup efforts and assist with data-related projects – 25%. Create, update, and maintain team documentation to ensure accuracy and accessibility – 15%. Provide support for ServiceNow tickets and Azure DevOps tasks as needed – 5%. Other duties as assigned – 5%.
Minimum Qualifications. EDUCATION/CERTIFICATIONS: BA/BS degree in technology or business, or equivalent practical experience.
TECHNICAL/SOFT SKILLS: Excellent customer service skills with the ability to manage stakeholder expectations and collaborate with team members at all levels of the organization (Required). Strong attention to detail with exceptional organizational and prioritization skills (Required). Ability to thrive in a fast-paced, evolving environment and adapt to changing priorities (Required). Clear and effective communication skills, with the ability to explain complex data requirements to cross-functional teams (Required). Basic understanding of database concepts and queries (Required). Self-motivated with the ability to work independently as well as collaboratively within a team (Required).
EXPERIENCE: 1+ years Dynamics CRM/365 experience (Preferred). 1+ years in a Data Quality/Management/Operations role (Preferred). 2+ years proven customer service skills (Required). Basic knowledge of writing/optimizing SQL queries (Preferred). Basic knowledge of database concepts (Required). Proficiency with Microsoft Office (Excel, Outlook, PowerPoint, Word) (Required). 1+ years experience working with Alteryx or other ETL tools (Preferred). 1+ years experience working with a work queue program (ticket system) (Preferred).
At RSM, we offer a competitive benefits and compensation package for all our people. We offer flexibility in your schedule, empowering you to balance life’s demands, while also maintaining your ability to serve clients. Learn more about our total rewards at https://rsmus.com/careers/india.html.
RSM does not tolerate discrimination and/or harassment based on race; colour; creed; sincerely held religious beliefs, practices or observances; sex (including pregnancy or disabilities related to nursing); gender (including gender identity and/or gender expression); sexual orientation; HIV status; national origin; ancestry; familial or marital status; age; physical or mental disability; citizenship; political affiliation; medical condition (including family and medical leave); domestic violence victim status; past, current or prospective service in the Indian Armed Forces; Indian Armed Forces Veterans and Indian Armed Forces Personnel status; pre-disposing genetic characteristics; or any other characteristic protected under applicable provincial employment legislation. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process and/or employment/partnership. RSM is committed to providing equal opportunity and reasonable accommodation for people with disabilities. If you require a reasonable accommodation to complete an application, interview, or otherwise participate in the recruiting process, please send us an email at careers@rsmus.com.
Posted 4 days ago
7.0 years
0 Lacs
Delhi
On-site
Description

Shape the Future of Work with Eptura
At Eptura, we're not just another tech company—we're a global leader transforming the way people, workplaces, and assets connect. Our innovative worktech solutions empower 25 million users across 115 countries to thrive in a digitally connected world. Trusted by 45% of Fortune 500 companies, we're redefining workplace innovation and driving success for organizations around the globe.

Job Description
We are seeking a Technical Lead – Data Engineering to spearhead the design, development, and optimization of complex data pipelines and ETL processes. This role requires deep expertise in data modeling, cloud platforms, and automation to ensure high-quality, scalable solutions. You will collaborate closely with stakeholders, engineers, and business teams to drive data-driven decision-making across our organization.

Responsibilities
Work with stakeholders to understand data requirements and architect end-to-end ETL solutions.
Design and maintain data models, including schema design and optimization.
Develop and automate data pipelines to ensure quality, consistency, and efficiency.
Lead the architecture and delivery of key modules within data platforms.
Build and refine complex data models in Power BI, simplifying data structures with dimensions and hierarchies (see the sketch after this listing).
Write clean, scalable code using Python, Scala, and PySpark (must-have skills).
Test, deploy, and continuously optimize applications and systems.
Mentor team members and participate in engineering hackathons to drive innovation.

About You
7+ years of experience in Data Engineering, with at least 2 years in a leadership role.
Strong expertise in Python, PySpark, and SQL for data processing and transformation.
Hands-on experience with Azure cloud computing, including Azure Data Factory and Databricks.
Proficiency in analytics/visualization tools: Power BI, Looker, Tableau, IBM Cognos.
Strong understanding of data modeling, including dimension and hierarchy structures.
Experience working with Agile methodologies and DevOps practices (GitLab, GitHub).
Excellent communication and problem-solving skills in cross-functional environments.
Ability to reduce cost, complexity, and security risk with scalable analytics solutions.

Nice to have:
Experience working with NoSQL databases (Cosmos DB, MongoDB).
Familiarity with AutoCAD and building systems for advanced data visualization.
Knowledge of identity and security protocols such as SAML, SCIM, and FedRAMP compliance.

Benefits
Health insurance fully paid for spouse, children, and parents
Accident insurance fully paid
Flexible working allowance
25 days holidays
7 paid sick days
10 public holidays
Employee Assistance Program

Eptura Information
Follow us on Twitter | LinkedIn | Facebook | YouTube
Eptura is an Equal Opportunity Employer. At Eptura we promote our flexible workspace environment, free from discrimination. We believe that diversity of experience, perspective, and background leads to a better environment for all our people and a better product for our customers. Everyone is welcome at Eptura, no matter where you are from, and the more diverse we are, the more unified we will be in ensuring respectful connections all around the world.
#LI-TS1 #LI-Hybrid

About Eptura
Ready to make a difference? Explore opportunities with Eptura and join us on this incredible journey. Joining Eptura means becoming part of a forward-thinking, dynamic team that's on a mission to shape a better, more connected future.
We're seeking passionate, driven individuals who want to make a real impact and be at the forefront of workplace innovation. At Eptura, diversity and inclusion are at the heart of what we do. We believe that embracing unique perspectives and backgrounds leads to stronger teams and better solutions for our customers. We are committed to creating a flexible, inclusive environment where everyone is welcome and empowered to succeed.
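To give a concrete flavor of the dimensional-modeling work this role describes, here is a minimal PySpark sketch that derives a product dimension with a category-to-subcategory hierarchy from raw records. It is an illustration only: the paths, table, and column names are hypothetical, not Eptura's.

```python
# A minimal sketch of dimensional modeling in PySpark: build a product
# dimension with a category -> subcategory hierarchy from raw records.
# All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dim_product_build").getOrCreate()

raw = spark.read.parquet("/mnt/raw/products")  # hypothetical source path

dim_product = (
    raw
    .select("product_id", "product_name", "category", "subcategory")
    .dropDuplicates(["product_id"])
    # Surrogate key for the dimension; monotonically_increasing_id is
    # convenient for a sketch, though production pipelines often use hashes.
    .withColumn("product_key", F.monotonically_increasing_id())
    # A flattened hierarchy column that Power BI can split into levels.
    .withColumn("category_path", F.concat_ws(" > ", "category", "subcategory"))
)

dim_product.write.mode("overwrite").parquet("/mnt/curated/dim_product")
```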
Posted 4 days ago
2.0 years
0 - 0 Lacs
Mohali
On-site
We are hiring an ETL SQL Developer with 2 years of experience in building and maintaining data pipelines. The ideal candidate should be proficient in SQL and ETL tools, with a solid understanding of relational databases.

Responsibilities:
Develop and manage ETL pipelines for data extraction, transformation, and loading
Write and optimize complex SQL queries
Ensure data accuracy, consistency, and reliability
Collaborate with analysts and stakeholders to deliver data solutions
Troubleshoot and improve pipeline performance

Requirements:
2+ years of experience in ETL and SQL development
Strong knowledge of relational databases (e.g., MySQL, PostgreSQL)
Familiarity with ETL tools or scripting (e.g., Python, Talend)
Basic knowledge of cloud platforms is a plus

Job Type: Full-time
Pay: ₹20,000.00 – ₹40,000.00 per month
Benefits:
Leave encashment
Provident Fund
Schedule: Evening shift
Supplemental Pay: Overtime pay
Work Location: In person
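For readers weighing roles like this one, the following is a minimal sketch of an extract-transform-load cycle in plain Python. sqlite3 stands in for a relational database such as MySQL or PostgreSQL, and the file, table, and column names are hypothetical.

```python
# A minimal extract-transform-load sketch. sqlite3 stands in for the
# target relational database; schema and file names are hypothetical.
import csv
import sqlite3

def run_etl(csv_path: str, db_path: str) -> None:
    # Extract: read raw rows from a CSV export.
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))

    # Transform: normalize text fields and drop rows missing a primary key.
    cleaned = [
        (r["order_id"], r["customer"].strip().title(), float(r["amount"]))
        for r in rows
        if r.get("order_id")
    ]

    # Load: upsert into the target table inside one transaction.
    con = sqlite3.connect(db_path)
    with con:
        con.execute(
            "CREATE TABLE IF NOT EXISTS orders "
            "(order_id TEXT PRIMARY KEY, customer TEXT, amount REAL)"
        )
        con.executemany(
            "INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", cleaned
        )
    con.close()

if __name__ == "__main__":
    run_etl("orders.csv", "warehouse.db")
```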
Posted 4 days ago
15.0 years
0 Lacs
Bhubaneshwar
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in ensuring that data is accessible, reliable, and ready for analysis, contributing to informed decision-making within the organization.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions for work-related problems.
- Assist in the design and implementation of data architecture and data models.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in the Databricks Unified Data Analytics Platform.
- Good-to-Have Skills: Experience with Apache Spark and data lake architectures.
- Strong understanding of ETL processes and data integration techniques.
- Familiarity with data quality frameworks and data governance practices.
- Experience with cloud platforms such as AWS or Azure.

Additional Information:
- The candidate should have a minimum of 3 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Bhubaneswar office.
- A 15 years full time education is required.
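As a rough illustration of the Databricks-style pipeline work described above, here is a hedged PySpark sketch of a simple raw-to-curated hop that ingests JSON, applies basic quality checks, and writes a Delta table. Paths and column names are hypothetical, and Delta support is assumed to come from the Databricks runtime.

```python
# A hedged sketch of a raw-to-curated hop: ingest raw JSON, enforce basic
# data quality, and write a Delta table. Paths and columns are hypothetical;
# Delta support is assumed to be provided by the Databricks runtime.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

bronze = spark.read.json("/mnt/bronze/events")

silver = (
    bronze
    .filter(F.col("event_id").isNotNull())           # drop unusable records
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .dropDuplicates(["event_id"])
)

# A simple quality gate: fail the job loudly rather than publish bad data.
if silver.count() == 0:
    raise ValueError("silver set is empty; upstream extract likely failed")

silver.write.format("delta").mode("overwrite").save("/mnt/silver/events")
```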
Posted 4 days ago
2.0 years
0 Lacs
Raipur
On-site
Company Name: Interbiz Consulting Pvt Ltd
Position/Designation: Data Engineer
Job Location: Raipur (C.G.)
Mode: Work from office
Experience: 2 to 5 years

We are seeking a talented and detail-oriented Data Engineer to join our growing Data & Analytics team. You will be responsible for building and maintaining robust, scalable data pipelines and infrastructure to support data-driven decision-making across the organization.

Key Responsibilities
Design and implement ETL/ELT data pipelines for structured and unstructured data using Azure Data Factory, Databricks, or Apache Spark.
Work with Azure Blob Storage, Data Lake, and Synapse Analytics to build scalable data lakes and warehouses.
Develop real-time data ingestion pipelines using Apache Kafka, Apache Flink, or Apache Beam.
Build and schedule jobs using orchestration tools like Apache Airflow or Dagster.
Perform data modeling using the Kimball methodology for building dimensional models in Snowflake or other data warehouses.
Implement data versioning and transformation using DBT and Apache Iceberg or Delta Lake.
Manage data cataloging and lineage using tools like Marquez or Collibra.
Collaborate with DevOps teams to containerize solutions using Docker, manage infrastructure with Terraform, and deploy on Kubernetes.
Set up and maintain monitoring and alerting with Prometheus and Grafana for performance and reliability.

Required Skills and Qualifications
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
2–5 years of experience in data engineering or related roles.
Proficiency in Python, with strong knowledge of OOP and data structures & algorithms.
Comfortable working in Linux environments for development and deployment.
Strong command of SQL and understanding of relational (DBMS) and NoSQL databases.
Solid experience with Apache Spark (PySpark/Scala).
Familiarity with real-time processing tools like Kafka, Flink, or Beam.
Hands-on experience with Airflow, Dagster, or similar orchestration tools.
Deep experience with Microsoft Azure, especially Azure Data Factory, Blob Storage, Synapse, Azure Functions, etc.
AZ-900 or other Azure certifications are a plus.
Knowledge of dimensional modeling, Snowflake, Apache Iceberg, and Delta Lake.
Understanding of modern lakehouse architecture and related best practices.
Familiarity with Marquez, Collibra, or other cataloging tools.
Experience with Terraform, Docker, Kubernetes, and Jenkins or equivalent CI/CD tools.
Proficiency in setting up dashboards and alerts with Prometheus and Grafana.

Interested candidates may share their CV at swapna.rani@interbizconsulting.com or visit www.interbizconsulting.com
Note: Immediate joiners will be preferred.

Job Type: Full-time
Pay: From ₹25,000.00 per month
Benefits:
Food provided
Health insurance
Leave encashment
Provident Fund
Supplemental Pay: Yearly bonus

Application Question(s):
Do you have at least 2 years of work experience in Python?
Do you have at least 2 years of work experience in Data Science?
Are you from Raipur, Chhattisgarh?
Are you willing to work for more than 2 years?
What is your notice period?
What are your current and expected salaries?

Work Location: In person
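Since this listing leans heavily on orchestration, here is a minimal Apache Airflow DAG sketch (assuming Airflow 2.4+) wiring extract, transform, and load steps into a daily schedule. The task bodies are placeholders and all names are hypothetical.

```python
# A minimal Airflow DAG sketch: extract, transform, and load steps on a
# daily schedule. Task bodies are placeholders; all names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling raw files from Azure Blob Storage")  # placeholder

def transform():
    print("running Spark/DBT transformations")  # placeholder

def load():
    print("publishing curated tables to Synapse")  # placeholder

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```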
Posted 4 days ago
3.0 years
0 Lacs
Chennai
On-site
Calling all innovators – find your future at Fiserv.
We're Fiserv, a global leader in fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Professional, Software Development Engineering

What does a successful Professional, Data Conversions do at Fiserv?
A Conversion Professional is responsible for the timely and accurate conversion of new and existing bank/client data to Fiserv systems, from both internal and external sources. This role provides data analysis for client projects and accommodates ad hoc data updates to meet client requests. As part of the overall Service Delivery organization, a Conversion Professional plays a critical role in mapping in data to support project initiatives for new and existing banks. Working with financial services data means a high priority on accuracy and adherence to procedures and guidelines.

What will you do
In addition to the responsibilities above, you will provide backup coverage across conversion projects. A person stepping in as backup must review the specification history and understand the code developed to resolve the issue or change; the same handoff occurs on the switch back to the original developer. Today, the associate handling the project logs back in to support the effort and address the issue or change.

What you will need to have
Bachelor's degree in programming or a related field
Minimum 3 years' relevant experience in data processing (ETL) conversions or the financial services industry
3–5 years' experience and strong knowledge of MS SQL/PSQL, MS SSIS, and data warehousing concepts
Strong communication skills and the ability to provide technical information to non-technical colleagues
Team player with the ability to work independently
Experience in the full software development life cycle using agile methodologies
Good understanding of Agile methodologies and the ability to handle agile ceremonies
Efficient in reviewing, coding, testing, and debugging application/bank programs
Able to work under pressure while resolving critical issues in the production environment
Good communication skills and experience working with clients
Good understanding of the banking domain

What would be great to have
Experience with Informatica, Power BI, MS Visual Basic, Microsoft Access, and Microsoft Excel
Experience with card management systems; debit card processing is a plus
Ability to manage and prioritize a work queue across multiple workstreams
Highest attention to detail and accuracy

Thank you for considering employment with Fiserv. Please:
Apply using your legal name
Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).

Our commitment to Diversity and Inclusion:
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note to agencies:
Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning about fake job posts:
Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
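A large part of conversion work is reconciliation. The sketch below shows one hedged example in Python: comparing row counts between a source extract and a converted target table via pyodbc. The connection strings and table names are illustrative placeholders, not Fiserv systems.

```python
# A hedged sketch of a conversion sanity check: compare row counts between a
# source extract and the converted target table. Connection strings and table
# names are hypothetical placeholders.
import pyodbc

def row_count(conn_str: str, table: str) -> int:
    with pyodbc.connect(conn_str) as con:
        cur = con.cursor()
        # Table name comes from a trusted internal mapping, not user input.
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]

SOURCE = "DRIVER={ODBC Driver 18 for SQL Server};SERVER=src;DATABASE=stage;Trusted_Connection=yes"
TARGET = "DRIVER={ODBC Driver 18 for SQL Server};SERVER=tgt;DATABASE=core;Trusted_Connection=yes"

src_n = row_count(SOURCE, "dbo.accounts_extract")
tgt_n = row_count(TARGET, "dbo.accounts")

if src_n != tgt_n:
    raise SystemExit(f"conversion mismatch: source={src_n}, target={tgt_n}")
print(f"row counts reconcile at {src_n}")
```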
Posted 4 days ago
15.0 years
0 Lacs
Chennai
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: PySpark
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in PySpark.
- Good-to-Have Skills: Experience with Apache Kafka.
- Strong understanding of data warehousing concepts and architecture.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience with SQL and NoSQL databases for data storage and retrieval.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Chennai.
- A 15 years full time education is required.
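To illustrate the data-quality side of a PySpark role like this, here is a minimal sketch that profiles null counts across every column of a dataset in a single aggregate pass. The input path is hypothetical.

```python
# A minimal PySpark data-quality sketch: count nulls in every column before
# promoting a dataset. The input path is hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("null_profile").getOrCreate()

df = spark.read.parquet("/data/landing/customers")

# One aggregate pass that counts nulls in every column.
null_counts = df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
)
null_counts.show(truncate=False)
```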
Posted 4 days ago
0 years
0 Lacs
Chennai
On-site
P2-C2-STS
Examine business needs to determine the appropriate automated testing approach.
Maintain existing regression suites and test scripts.
Attend agile ceremonies, including backlog refinement, sprint planning, and daily scrum meetings.
Execute regression suites and report results to developers, project managers, stakeholders, and manual testers.
Develop and execute test plans, test cases, and test scripts for ETL processes.
Validate data extraction, transformation, and loading workflows.
Analyze test results and provide detailed reports to stakeholders.
Automate repetitive testing tasks to improve efficiency.
Use a strong SQL foundation to validate transformations.

Skill Proficiency Level Expected
Strong ETL testing
Strong SQL – in-depth understanding of SQL queries and applying them in QA testing

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
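The following is a self-contained sketch of the kind of automated ETL test this role calls for: seed a source table, apply the transformation under test, and validate the result with SQL. sqlite3 stands in for the project database, and the rounding rule shown is a hypothetical transformation, not Virtusa's.

```python
# A self-contained sketch of an automated ETL test: seed a source table,
# run the transformation under test, and assert on the result with SQL.
# sqlite3 stands in for the project database; the rule is hypothetical.
import sqlite3

def test_amounts_are_rounded_to_two_decimals():
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE src (id INTEGER, amount REAL)")
    con.executemany("INSERT INTO src VALUES (?, ?)", [(1, 10.567), (2, 3.141)])

    # Transformation under test: round monetary amounts to two decimals.
    con.execute("CREATE TABLE tgt AS SELECT id, ROUND(amount, 2) AS amount FROM src")

    # SQL-based validation, mirroring how a QA engineer checks transformations.
    bad = con.execute(
        "SELECT COUNT(*) FROM tgt WHERE amount != ROUND(amount, 2)"
    ).fetchone()[0]
    assert bad == 0

if __name__ == "__main__":
    test_amounts_are_rounded_to_two_decimals()
    print("transformation validated")
```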
Posted 4 days ago
3.0 years
6 - 9 Lacs
Chennai
On-site
Join the Ford HR Management Security & Controls department! Our team is dedicated to ensuring robust and secure user access management for Human Resource (HR) applications in our global environment. We are responsible for the tools and processes that allow HR staff, Ford employees, and suppliers to request and authorize access efficiently and securely. We also maintain critical interfaces that connect our access management systems to various downstream applications. A key focus area for our team is the configuration and management of security roles within our global HR system, Oracle HCM.

Oracle HCM (Human Capital Management) is Ford's comprehensive global HR platform. It covers core HR processes (such as employee data management, promotions, and internal transfers) as well as Compensation, Learning & Development, Talent Management, Recruiting, and Payroll.

We are looking for a skilled and experienced IT Analyst/Specialist with deep knowledge of Oracle HCM, particularly its security and access management capabilities. This role is critical to ensuring the integrity and security of our HR data and systems. You will also leverage your skills in SQL and Informatica PowerCenter to support data analysis, reporting, and ETL processes vital to our operations. You'll be joining a dynamic, globally distributed IT team with members located in the US, India, and Germany, collaborating across regions to achieve our shared goals.

Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience.
3+ years of experience with Oracle HCM, with a strong focus on security configuration and user access management.
3+ years of experience with SQL for data querying, analysis, and manipulation.
Hands-on experience designing, developing, and maintaining ETL processes (e.g., using Informatica IICS).
Understanding of data security principles and best practices, especially in an HR context.
Experience troubleshooting complex technical issues related to access, security, or data integration.
Strong analytical and problem-solving skills.
Excellent communication and collaboration skills; comfortable working with global teams across different time zones.

Desired Skills:
Experience with other Oracle HCM security modules.
Experience with other Oracle technologies or modules within HCM (e.g., Oracle BI Publisher).
Experience working in a large, global enterprise environment.

Responsibilities:
Configure, manage, and maintain security roles, profiles, and permissions within the global Oracle HCM system, ensuring compliance with security policies.
Design, develop, and maintain Extract, Transform, Load (ETL) processes using Informatica PowerCenter to move and integrate data from various sources.
Utilize SQL for data extraction, analysis, and validation.
Collaborate closely with HR functional teams and other IT teams to understand security and data requirements.
Ensure implemented solutions adhere to security best practices and internal controls.
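As a hedged illustration of the SQL-driven access auditing this role involves, the sketch below lists users holding a given security role using the python-oracledb driver. The credentials, DSN, and table names are illustrative placeholders and do not reflect the actual Oracle HCM data model.

```python
# A hedged sketch of an access audit: list users holding a given security
# role. Uses the python-oracledb driver; credentials, DSN, and the tables
# below are illustrative placeholders, not the Oracle HCM schema.
import oracledb

def users_with_role(role_name: str) -> list[str]:
    with oracledb.connect(
        user="audit_ro", password="***", dsn="hcm-db/HCMPDB"  # placeholders
    ) as con:
        cur = con.cursor()
        cur.execute(
            """
            SELECT u.username
            FROM   app_users u
            JOIN   app_user_roles ur ON ur.user_id = u.user_id
            JOIN   app_roles r       ON r.role_id = ur.role_id
            WHERE  r.role_name = :role_name
            """,
            role_name=role_name,
        )
        return [row[0] for row in cur.fetchall()]

print(users_with_role("HR Data Administrator"))
```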
Posted 4 days ago
The ETL (Extract, Transform, Load) job market in India is thriving with numerous opportunities for job seekers. ETL professionals play a crucial role in managing and analyzing data effectively for organizations across various industries. If you are considering a career in ETL, this article will provide you with valuable insights into the job market in India.
India's major tech hubs are known for their thriving technology industries and typically have a high demand for ETL professionals.
The average salary range for ETL professionals in India varies based on experience levels. Entry-level positions typically start at around ₹3-5 lakhs per annum, while experienced professionals can earn upwards of ₹10-15 lakhs per annum.
In the ETL field, a typical career path may include roles such as:
Junior ETL Developer
ETL Developer
Senior ETL Developer
ETL Tech Lead
ETL Architect
As you gain experience and expertise, you can progress to higher-level roles within the ETL domain.
Alongside ETL, professionals in this field are often expected to have skills in:
SQL
Data Warehousing
Data Modeling
ETL Tools (e.g., Informatica, Talend)
Database Management Systems (e.g., Oracle, SQL Server)
Having a strong foundation in these related skills can enhance your capabilities as an ETL professional.
Here are 25 interview questions that you may encounter in ETL job interviews:
As you explore ETL jobs in India, remember to showcase your skills and expertise confidently during interviews. With the right preparation and a solid understanding of ETL concepts, you can embark on a rewarding career in this dynamic field. Good luck with your job search!
Accenture
36723 Jobs | Dublin
Wipro
11788 Jobs | Bengaluru
EY
8277 Jobs | London
IBM
6362 Jobs | Armonk
Amazon
6322 Jobs | Seattle, WA
Oracle
5543 Jobs | Redwood City
Capgemini
5131 Jobs | Paris, France
Uplers
4724 Jobs | Ahmedabad
Infosys
4329 Jobs | Bangalore, Karnataka
Accenture in India
4290 Jobs | Dublin 2