
8236 Hadoop Jobs - Page 8

JobPe aggregates listings for easy access; you apply directly on the original job portal.

3.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Category: Testing/Quality Assurance
Main location: Chennai/Bangalore, India
Position ID: J0725-1442
Employment Type: Full Time

Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services, and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI's Fiscal 2024 reported revenue is CA$14.68 billion, and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Job Title: ETL Testing Engineer
Position: Senior Test Engineer
Experience: 3-9 years
Category: Quality Assurance/Software Testing
Shift: 1-10 pm (UK shift)

Position Description: We are looking for an experienced DataStage tester to join our team. The ideal candidate should be passionate about coding and testing scalable, high-performance applications.

Your future duties and responsibilities:
- Develop and execute ETL test cases to validate data extraction, transformation, and loading processes.
- Write complex SQL queries to verify data integrity, consistency, and correctness across source and target systems.
- Automate ETL testing workflows using Python, PyTest, or other testing frameworks.
- Perform data reconciliation, schema validation, and data quality checks.
- Identify and report data anomalies, performance bottlenecks, and defects.
- Work closely with Data Engineers, Analysts, and Business Teams to understand data requirements.
- Design and maintain test data sets for validation.
- Implement CI/CD pipelines for automated ETL testing (Jenkins, GitLab CI, etc.).
- Document test results, defects, and validation reports.

Required qualifications to be successful in this role:
- ETL Testing: Strong experience in testing Informatica, Talend, SSIS, Databricks, or similar ETL tools.
- SQL: Advanced SQL skills (joins, aggregations, subqueries, stored procedures).
- Python: Proficiency in Python for test automation (Pandas, PySpark, PyTest).
- Databases: Hands-on experience with RDBMS (Oracle, SQL Server, PostgreSQL) and NoSQL (MongoDB, Cassandra).
- Big Data Testing (good to have): Hadoop, Hive, Spark, Kafka.
- Testing Tools: Knowledge of Selenium, Airflow, Great Expectations, or similar frameworks.
- Version Control: Git, GitHub/GitLab.
- CI/CD: Jenkins, Azure DevOps, or similar.
- Soft Skills: Strong analytical and problem-solving skills; ability to work in Agile/Scrum environments; good communication skills for cross-functional collaboration.

Preferred Qualifications:
- Experience with cloud platforms (AWS, Azure).
- Knowledge of Data Warehousing concepts (Star Schema, Snowflake Schema).
- Certification in ETL Testing, SQL, or Python is a plus.

Skills: Data Warehousing, MS SQL Server, Python

What you can expect from us: Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect, and belonging. You are invited to be an owner from day 1 as we work together to bring our Dream to life; that's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value: you'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last, supported by leaders who care about your health and well-being and who provide opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
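
The SQL verification and PyTest automation this posting centers on can be illustrated with a small reconciliation suite. This is a hypothetical sketch, not CGI's actual test code: the connection URLs, table names, and columns are placeholders (SQLite stands in for the real source and target databases).

```python
# Hypothetical ETL reconciliation checks with PyTest: row counts, column
# totals, and key coverage must match between a source table and its target.
import pandas as pd
import pytest
from sqlalchemy import create_engine

SOURCE_URL = "sqlite:///source.db"  # placeholder for e.g. Oracle
TARGET_URL = "sqlite:///target.db"  # placeholder for e.g. SQL Server

@pytest.fixture(scope="module")
def frames():
    # Pull the comparable slice of data from both sides of the ETL
    src = pd.read_sql("SELECT id, amount FROM orders", create_engine(SOURCE_URL))
    tgt = pd.read_sql("SELECT id, amount FROM dw_orders", create_engine(TARGET_URL))
    return src, tgt

def test_row_counts_match(frames):
    src, tgt = frames
    assert len(src) == len(tgt), f"row count drift: {len(src)} vs {len(tgt)}"

def test_amount_totals_match(frames):
    src, tgt = frames
    assert src["amount"].sum() == pytest.approx(tgt["amount"].sum())

def test_no_orphan_keys(frames):
    src, tgt = frames
    missing = set(src["id"]) - set(tgt["id"])
    assert not missing, f"{len(missing)} source ids never reached the target"
```

Run with `pytest -q`; wired into Jenkins or GitLab CI, the same suite becomes the automated gate the posting describes.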

Posted 2 days ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana

Remote

Software Engineer II
Hyderabad, Telangana, India
Date posted: Jul 31, 2025
Job number: 1830824
Work site: Up to 50% work from home
Travel: None
Role type: Individual Contributor
Profession: Software Engineering
Discipline: Software Engineering
Employment type: Full-Time

Overview: Microsoft's Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging and real-time analytics, and business intelligence. The products in our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture. Within Azure Data, the data integration team builds data gravity on the Microsoft Cloud. Massive volumes of data are generated, not just from transactional systems of record but also from the world around us. Our data integration products, Azure Data Factory and Power Query, make it easy for customers to bring in, clean, shape, and join data to extract intelligence. We do not just value differences or different perspectives; we seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.

Qualifications
Required/Minimum Qualifications:
- Bachelor's Degree in Computer Science or a related technical discipline AND 4+ years of technical engineering experience with coding in languages like C#, React, Redux, TypeScript, JavaScript, Java, or Python, OR equivalent experience.
- Experience in data integration, data migrations, ELT, or ETL tooling is mandatory.

Other Requirements: Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check. This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

#azdat #azuredata #microsoftfabric #dataintegration

Responsibilities:
- Build cloud-scale products with a focus on efficiency, reliability, and security.
- Build and maintain end-to-end build, test, and deployment pipelines.
- Deploy and manage massive Hadoop, Spark, and other clusters.
- Contribute to the architecture and design of the products.
- Triage issues and implement solutions to restore service with minimal disruption to the customer and business; perform root cause analysis, trend analysis, and post-mortems.
- Own components and drive them end to end, from gathering requirements through development, testing, and deployment, ensuring high quality and availability post-deployment.
- Embody our culture and values.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work: industry-leading healthcare, educational resources, discounts on products and services, savings and investments, maternity and paternity leave, generous time away, giving programs, and opportunities to network and connect.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 2 days ago

Apply

1.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Voyager (94001), India, Bangalore, Karnataka
Associate Data Engineer

Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One, you'll be part of a big group of makers, breakers, doers, and disruptors who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Data Engineer, you'll have the opportunity to be at the forefront of driving a major transformation within Capital One.

What You'll Do:
- Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies.
- Work with a team of developers with deep experience in machine learning, distributed microservices, and full-stack systems.
- Utilize programming languages like Java, Scala, and Python; open-source RDBMS and NoSQL databases; and cloud-based data warehousing services such as Redshift and Snowflake.
- Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal and external technology communities, and mentoring other members of the engineering community.
- Collaborate with digital product managers to deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment.
- Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance.

Basic Qualifications:
- Bachelor's Degree
- At least 1.5 years of experience in application development (internship experience does not apply)
- At least 1 year of experience in big data technologies

Preferred Qualifications:
- 3+ years of experience in application development, including Python, SQL, Scala, or Java
- 1+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud)
- 2+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL)
- 1+ years of experience working on real-time data and streaming applications
- 1+ years of experience with NoSQL implementations (Mongo, Cassandra)
- 1+ years of data warehousing experience (Redshift or Snowflake)
- 2+ years of experience with UNIX/Linux, including basic commands and shell scripting
- 1+ years of experience with Agile engineering practices

At this time, Capital One will not sponsor a new applicant for employment authorization for this position. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com. Capital One does not provide, endorse, nor guarantee, and is not liable for, third-party products, services, educational tools, or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe, and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
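
The streaming qualifications above (Kafka, Spark, real-time data) can be pictured with a minimal Spark Structured Streaming job. This is an illustrative sketch, not Capital One code; the broker address, topic, and event schema are invented, and the Kafka source assumes the spark-sql-kafka package is on the classpath.

```python
# Illustrative Spark Structured Streaming job: consume JSON events from
# Kafka, aggregate spend per account per minute, and stream the results out.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("txn-stream").getOrCreate()

schema = StructType([
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "transactions")               # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# One-minute spend per account, tolerating 30 seconds of late-arriving data
per_account = (
    events.withWatermark("event_time", "30 seconds")
    .groupBy(F.window("event_time", "1 minute"), "account_id")
    .agg(F.sum("amount").alias("spend"))
)

query = (
    per_account.writeStream.outputMode("update")
    .format("console")  # swap for a production sink (Kafka, Delta, etc.)
    .start()
)
query.awaitTermination()
```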

Posted 2 days ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Voyager (94001), India, Bangalore, Karnataka
Manager, Software Engineering - Full Stack (INSE51)

Do you want to work for a tech company that writes its own code, develops its own software, and builds its own products? We experiment and innovate leveraging the latest technologies, engineer breakthrough customer experiences, and bring simplicity and humanity to banking. We make a difference for 65 million customers. At Capital One, you'll be part of a big group of makers, breakers, doers, and disruptors who love to solve real problems and meet real customer needs. We want you to be curious and ask "what if?" Capital One started as an information strategy company that specialized in credit cards, and we have become one of the most impactful and disruptive players in the industry. We have grown to see ourselves as a technology company in consumer finance, with great opportunities for software engineers who want to build innovative applications that give users smarter ways to save, transact, borrow, and invest their money, as we seek to disrupt the industry again. As a Capital One Software Engineer, you'll work on everything from customer-facing web and mobile applications using cutting-edge open-source frameworks, to highly available RESTful services, to back-end Java-based systems using the hottest techniques in Big Data. You'll bring solid experience in emerging and traditional technologies such as node.js, Java, AngularJS, React, Python, REST, JSON, XML, Ruby, HTML/HTML5, CSS, NoSQL databases, relational databases, Hadoop, Chef, Maven, iOS, Android, and AWS/cloud infrastructure, to name a few.

You will:
- Work with product owners to understand desired application capabilities and testing scenarios.
- Continuously improve software engineering practices.
- Work within and across Agile teams to design, develop, test, implement, and support technical solutions across a full stack of development tools and technologies.
- Lead the craftsmanship, availability, resilience, and scalability of your solutions.
- Bring a passion to stay on top of tech trends, experiment with and learn new technologies, participate in internal and external technology communities, and mentor other members of the engineering community.
- Encourage innovation, implementation of cutting-edge technologies, inclusion, outside-of-the-box thinking, teamwork, self-organization, and diversity.
- Lead and/or mentor a team of engineers.

Basic Qualifications:
- Bachelor's Degree
- At least 7 years of experience in software development
- At least 2 years of experience in people management

Preferred Qualifications:
- Master's Degree
- 8+ years of experience in software development
- 7+ years of experience in Agile practices

At this time, Capital One will not sponsor a new applicant for employment authorization for this position. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com. Capital One does not provide, endorse, nor guarantee, and is not liable for, third-party products, services, educational tools, or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe, and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).

Posted 2 days ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Voyager (94001), India, Bangalore, Karnataka
Manager, Data Engineering

Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One, you'll be part of a big group of makers, breakers, doers, and disruptors who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Data Engineer, you'll have the opportunity to be at the forefront of driving a major transformation within Capital One.

What You'll Do:
- Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies.
- Work with a team of developers with deep experience in machine learning, distributed microservices, and full-stack systems.
- Utilize programming languages like Java, Scala, and Python; open-source RDBMS and NoSQL databases; and cloud-based data warehousing services such as Redshift and Snowflake.
- Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal and external technology communities, and mentoring other members of the engineering community.
- Collaborate with digital product managers to deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment.
- Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance.

Basic Qualifications:
- Bachelor's Degree
- At least 4 years of experience in application development (internship experience does not apply)
- At least 2 years of experience in big data technologies
- At least 1 year of experience with cloud computing (AWS, Microsoft Azure, Google Cloud)
- At least 2 years of people management experience

Preferred Qualifications:
- 7+ years of experience in application development, including Python, SQL, Scala, or Java
- 4+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud)
- 4+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL)
- 4+ years of experience working on real-time data and streaming applications
- 4+ years of experience with NoSQL implementations (Mongo, Cassandra)
- 4+ years of data warehousing experience (Redshift or Snowflake)
- 4+ years of experience with UNIX/Linux, including basic commands and shell scripting
- 2+ years of experience with Agile engineering practices

At this time, Capital One will not sponsor a new applicant for employment authorization for this position. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com. Capital One does not provide, endorse, nor guarantee, and is not liable for, third-party products, services, educational tools, or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe, and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).

Posted 2 days ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a Senior Data Scientist, you will utilize your expertise in Python, Machine Learning (ML), Natural Language Processing (NLP), Generative AI (GenAI), and Azure Cloud Services to design, develop, and deploy advanced AI/ML models. Your role will involve solving complex business problems by building, training, and optimizing AI models using Python and relevant ML frameworks. You will implement Azure AI/ML services for scalable deployment and integrate APIs for real-time model inference. Working with large-scale data, you will extract insights to drive strategic initiatives and collaborate with cross-functional teams to integrate AI/ML solutions into applications.

Your responsibilities will include:
- Designing and developing ML, NLP, and GenAI models.
- Implementing Azure AI/ML services and developing APIs for model deployment.
- Working with large-scale data to extract insights.
- Collaborating with cross-functional teams and implementing CI/CD pipelines.
- Ensuring adherence to software engineering best practices.
- Staying updated on AI/ML advancements and conducting research on emerging trends.
- Providing technical mentorship and optimizing model performance in production environments.

To excel in this role, you must have proficiency in Python and ML frameworks like TensorFlow, PyTorch, or Scikit-learn; hands-on experience with NLP techniques; expertise in Generative AI models; knowledge of Azure AI/ML services; and experience in developing APIs for model deployment. Familiarity with CI/CD pipelines, software engineering principles, Agile methodologies, statistical analysis, data mining, and data visualization is essential. Preferred qualifications include experience in MLOps, knowledge of vector databases, and exposure to big data processing frameworks.

This is a full-time or part-time permanent position with benefits such as health insurance and provident fund. You will work day shifts from Monday to Friday with weekend availability, with the opportunity for performance bonuses. With a minimum of 8 years of experience in Python, Azure AI/ML services, and Senior Data Scientist roles, you will contribute to innovative solutions and drive data-driven decision-making in a collaborative environment.
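
Since the role pairs model building with APIs for real-time inference, a minimal serving sketch may help make that concrete. It assumes a scikit-learn pipeline persisted with joblib and uses FastAPI with pydantic v2; the model file, feature names, and route are hypothetical.

```python
# Minimal real-time inference API (FastAPI + pydantic v2). The model file
# and feature names below are placeholders for illustration only.
import joblib
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-inference")
model = joblib.load("model.joblib")  # hypothetical pre-trained pipeline

class Features(BaseModel):
    tenure_months: float
    monthly_spend: float
    num_products: int

@app.post("/predict")
def predict(payload: Features) -> dict:
    # One-row frame so the pipeline sees the same shape it was trained on
    X = pd.DataFrame([payload.model_dump()])
    proba = float(model.predict_proba(X)[0, 1])
    return {"positive_class_probability": proba}
```

Served with, for example, `uvicorn main:app`; on Azure, an endpoint like this would typically be containerized and deployed behind Azure ML or App Service.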

Posted 2 days ago

Apply

3.0 - 5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

As an Infrastructure Engineer, you will be responsible for the technical design, planning, implementation, and optimization of performance tuning and recovery procedures for critical enterprise systems and applications. You will serve as the technical authority in system administration for complex SaaS, local, and cloud-based environments. Your role is critical in ensuring the high availability, reliability, and scalability of our infrastructure components. You will also be involved in designing philosophies, tools, and processes to enable the rapid delivery of evolving products.

In This Role You Will:
- Design, configure, and document cloud-based infrastructures using AWS Virtual Private Cloud (VPC) and EC2 instances.
- Secure and monitor hosted production SaaS environments provided by third-party partners.
- Define, document, and manage network configurations within AWS VPCs and between VPCs and data center networks, including firewall, DNS, and ACL configurations.
- Lead the design and review of developer work on DevOps tools and practices.
- Ensure high availability and reliability of infrastructure components through monitoring and performance tuning.
- Implement and maintain security measures to protect infrastructure from threats.
- Collaborate with cross-functional teams to design and deploy scalable solutions.
- Automate repetitive tasks and improve processes using scripting languages such as Python, PowerShell, or Bash.
- Support Airflow DAGs in the Data Lake, utilizing the Spark framework and Big Data technologies.
- Provide support for infrastructure-related issues and conduct root cause analysis.
- Develop and maintain documentation for infrastructure configurations and procedures.
- Administer databases, handle data backups, monitor databases, and manage data rotation.
- Work with RDBMS and NoSQL systems, leading stateful data migration between different data systems.

Experience & Qualifications:
- Bachelor's or Master's degree in Information Science, Computer Science, Business, or equivalent work experience.
- 3-5 years of experience with Amazon Web Services, particularly VPC, S3, EC2, and EMR.
- Experience in setting up new VPCs and integrating them with existing networks is highly desirable.
- Experience in maintaining infrastructure for Data Lake/Big Data systems built on the Spark framework and Hadoop technologies.
- Experience with Active Directory and LDAP setup, maintenance, and policies.
- Workday certification is preferred but not required; exposure to Workday Integrations and Configuration is preferred.
- Strong knowledge of networking concepts and technologies.
- Experience with infrastructure automation tools (e.g., Terraform, Ansible, Chef).
- Familiarity with containerization technologies like Docker and Kubernetes.
- Excellent problem-solving skills and attention to detail.
- Strong verbal and written communication skills.
- Understanding of Agile project methodologies, including Scrum and Kanban, is required.

Accommodation requests: If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50. Employment opportunities and job offers at Expedia Group will always come from Expedia Group's Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you're confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability, or age.
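
Because the role supports Airflow DAGs feeding a Spark-based Data Lake, a skeletal DAG may help show the shape of that work. This is a hypothetical Airflow 2.x-style sketch; the DAG id, task callables, and storage paths are invented, and the Spark submission is stubbed out.

```python
# Hypothetical daily ingest DAG: land raw data in object storage, then run a
# Spark transform. Both tasks are stubs standing in for real operators.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3(**context):
    # placeholder: pull from an API or database and write to s3://data-lake/raw/
    print("extracted partition", context["ds"])

def spark_transform(**context):
    # placeholder: spark-submit / EMR / Databricks job submission goes here
    print("transformed partition", context["ds"])

with DAG(
    dag_id="daily_lake_ingest",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    transform = PythonOperator(task_id="spark_transform", python_callable=spark_transform)
    extract >> transform  # transform runs only after a successful extract
```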

Posted 2 days ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You have experience in ETL testing and are familiar with Agile methodology. With a minimum of 4-6 years of testing experience in test planning and execution, you possess working knowledge of database testing; prior experience in the auditing domain would be advantageous. Your strong application analysis, troubleshooting, and behavioral skills, along with extensive experience in manual testing, will be valuable. Experience in automation scripting is not mandatory but would be beneficial. You are adept at leading discussions with business, development, and vendor teams on testing activities such as defect coordination and test scenario reviews. Your excellent verbal and written communication skills enable you to communicate effectively with various stakeholders, and you are capable of working both independently and collaboratively with onshore and offshore teams. The role requires an experienced ETL developer with proficiency in Big Data technologies like Hadoop.

Key Skills Required:
- Hadoop (Hortonworks), HDFS
- Hive, Pig, Knox, Ambari, Ranger, Oozie
- Talend, SSIS
- MySQL, MS SQL Server, Oracle
- Windows, Linux

Being open to working 2nd shifts (1 pm - 10 pm) is essential for this role, and excellent English communication skills are crucial for effective collaboration. If you are interested, please share your profile on mytestingcareer.com, including your current CTC, expected CTC, notice period, current location, and contact number.

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

As the Manager, Data Scientist at our organization, you will play a crucial role in the Product Data & Analytics team. This team builds internal analytic partnerships to enhance the health of the business, optimize revenue opportunities, track initiatives, develop new products, and formulate go-to-market strategies. Your enthusiasm for data assets and your commitment to data-driven decision-making will be instrumental in driving the success of our Global Analytics team, which serves end users across six continents. You will be the key resource for data analytics within the company, leveraging your expertise to identify solutions in vast data sets and transform insights into strategic opportunities.

In this role, you will:
- Collaborate closely with the global pricing and interchange team to create analytic solutions using complex statistical and data science techniques.
- Develop dashboards, prototypes, and other tools to communicate data insights effectively across products, markets, and services.
- Lead cross-functional projects, using advanced data modeling and analysis techniques to uncover insights that inform strategic decisions and optimization opportunities.
- Translate business requirements into technical specifications, ensure timely deliverables, and uphold quality standards in data manipulation and analysis.
- Recruit, train, develop, and supervise analyst-level employees.
- Present findings and insights to stakeholders through platforms such as Tableau, Power BI, Excel, and PowerPoint.
- Conduct quality control, data validation, and cleansing processes on new and existing data sources.

The ideal candidate holds a strong academic background in Computer Science, Data Science, Technology, Mathematics, Statistics, or related fields. Proficiency in tools such as Alteryx, Python/Spark, Hadoop platforms, and advanced SQL is essential for building Big Data products and platforms. Experience in interacting with stakeholders, crafting narratives on product value, and contributing to product optimization efforts is highly valued. Familiarity with enterprise business intelligence platforms like Tableau and Power BI is advantageous, along with knowledge of ML frameworks, data structures, and software architecture.

To succeed in this role, you must possess excellent English communication skills, strong analytical abilities, attention to detail, creativity, and self-motivation. Your capacity to manage multiple tasks, operate in a fast-paced environment, and collaborate effectively with diverse teams will be critical. A Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, Mathematics, Statistics, or a related field is required, with additional certifications a plus. If you are a proactive individual with a passion for data analytics and a drive to excel in a dynamic environment, we invite you to consider this exciting opportunity to join our team as Manager, Data Scientist.

Posted 2 days ago

Apply

3.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Lead Java Engineer at NationsBenefits, you will be at the forefront of transforming the insurance industry through innovative benefits management solutions. You will play a crucial role in modernizing complex back-office systems to create scalable, secure, and high-performing platforms that streamline operations for our clients. With a focus on platform modernization, you will lead the transition of legacy systems to modern, cloud-native architectures that support scalability, reliability, and high performance in the insurance domain.

Your primary responsibility will be to spearhead the development of a cutting-edge FinTech application. This hands-on leadership role requires deep technical expertise in Java, including Spring Boot, Lombok, and JDK 17+, as well as strong team-building, mentoring, and cross-functional collaboration skills. Working closely with product managers, business leaders, and engineers, you will design, develop, and deploy scalable financial solutions.

Key responsibilities include:
- Leading, mentoring, and growing a team of high-performing engineers.
- Recruiting and training top engineering talent.
- Defining and executing the technical strategy and architecture for the FinTech application.
- Leading the design, development, and deployment of Java-based microservices.
- Ensuring compliance with financial regulations and data security standards.
- Promoting a culture of continuous learning and process enhancement within the engineering team.

To succeed in this role, you should have a Bachelor's degree in computer science or a related field, along with 8+ years of experience in Java development and 3+ years of leadership experience. You should also have expertise in Java, microservices architecture, RESTful APIs, SQL/NoSQL databases, cloud platforms like AWS, GCP, or Azure, Agile methodologies, and secure application design and deployment.

Joining NationsBenefits offers you the opportunity to lead a pioneering FinTech initiative with cutting-edge technologies, grow your career in a fast-paced, innovative environment, impact the financial ecosystem by building secure, high-performance applications, and be part of a team that is passionate about driving technical excellence. If you are ready to build world-class FinTech solutions, lead exceptional teams, and make a difference in the industry, we encourage you to apply now.

Posted 2 days ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

As a senior-level Data Engineer with Machine Learning Analyst capabilities, you will play a crucial role in leading the architecture, development, and management of scalable data solutions. Your expertise in data architecture, big data pipeline development, and data quality enhancement will be key in processing large-scale datasets and supporting machine learning workflows.

Key responsibilities:
- Design, develop, and maintain end-to-end data pipelines for ingestion, transformation, and delivery across various business systems.
- Ensure robust data quality, data lineage, data reconciliation, and governance practices.
- Architect and manage data warehouse and big data solutions supporting both structured and unstructured data.
- Optimize and automate ETL/ELT processes for high-volume data environments, with a focus on processing 5B+ records.
- Collaborate with data scientists and analysts to support machine learning workflows and implement streamlined DaaS workflows.

Must-have skills:
- At least 10 years of experience in data engineering, including data architecture and pipeline development.
- Proven experience with Spark and Hadoop clusters for processing large-scale datasets.
- Strong understanding of ETL frameworks, data quality processes, and automation best practices.
- Experience in data ingestion, lineage, governance, and reconciliation.
- Solid understanding of data warehouse design principles and data modeling.
- Expertise in automated data processing, especially for DaaS platforms.

Desirable skills:
- Experience with Apache HBase, Apache NiFi, and other Big Data tools.
- Knowledge of distributed computing principles and real-time data streaming.
- Familiarity with machine learning pipelines and supporting data structures.
- Exposure to data cataloging and metadata management tools.
- Proficiency in Python, Scala, or Java for data engineering tasks.

In addition to technical skills, the role requires a strong analytical and problem-solving mindset, excellent communication skills for collaboration across technical and business teams, and the ability to work independently, manage multiple priorities, and lead data initiatives. If you are excited about this opportunity and possess the necessary skills and experience, we look forward to receiving your application.
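
The reconciliation and data-quality duties described above lend themselves to an automated gate. Below is a generic PySpark sketch under assumed paths and column names, not the employer's actual pipeline; at the 5B+ record scale mentioned, checks like these would run as a scheduled job rather than ad hoc.

```python
# Sketch of an automated data-quality gate in PySpark: reconciliation,
# completeness, and uniqueness checks between raw and curated layers.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()

src = spark.read.parquet("s3://lake/raw/events/")      # hypothetical path
tgt = spark.read.parquet("s3://lake/curated/events/")  # hypothetical path

# Reconciliation: curated count should equal the raw count after key dedup
raw_count = src.dropDuplicates(["event_id"]).count()
cur_count = tgt.count()
assert cur_count == raw_count, f"count drift: raw={raw_count} curated={cur_count}"

# Completeness: key columns must not contain nulls
null_rates = tgt.select(
    *[(F.sum(F.col(c).isNull().cast("int")) / F.count("*")).alias(c)
      for c in ("event_id", "event_time", "account_id")]
).first().asDict()
bad = {c: r for c, r in null_rates.items() if r and r > 0.0}
assert not bad, f"null keys found: {bad}"

# Uniqueness: the primary key must be unique in the curated layer
dups = tgt.groupBy("event_id").count().filter("count > 1").count()
assert dups == 0, f"{dups} duplicate event_ids"
```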

Posted 2 days ago

Apply

7.0 - 11.0 years

0 Lacs

Haryana

On-site

Agoda is an online travel booking platform offering accommodations, flights, and more, connecting travelers with a global network of 4.7M hotels and holiday properties worldwide, as well as flights, activities, and additional services. As part of Booking Holdings and based in Asia, we boast a diverse team of 7,100+ employees from 95+ nationalities across 27 markets, fostering an environment rich in diversity, creativity, and collaboration. We thrive on innovation through a culture of experimentation and ownership, aiming to enhance our customers' ability to explore the world.

Our Purpose: Bridging the World Through Travel. We believe that travel enables people to enjoy, learn, and experience the wonders of our world, bringing individuals and cultures closer together and promoting empathy, understanding, and happiness. Join our skillful, driven, and diverse team from various parts of the globe, united by a shared passion to make a positive impact. Leveraging our innovative technologies and strong partnerships, we strive to simplify and enhance travel experiences for all.

The Opportunity: Agoda seeks developers to contribute to mission-critical systems involved in designing and developing APIs that serve millions of user search requests daily.

In this role, you will:
- Lead the development of features, experiments, technical projects, and complex systems.
- Serve as a technical architect, mentor, and advocate for the right technology choices.
- Continuously refine our architecture and enhance software development.
- Play a significant role in our agile and scrum practices.
- Collaborate with server, client, and infrastructure teams to deliver optimal solutions.
- Proactively seek ways to enhance our products, codebase, and development processes.
- Write high-quality code and support others in doing the same.
- Drive technical decisions within the organization.

What you'll need to succeed:
- Over 7 years of experience developing performance-critical applications in a production environment using Scala, Java, or C#.
- Proficiency in leading projects, initiatives, and teams with full ownership of the systems involved.
- Familiarity with data platforms like SQL, Cassandra, or Hadoop.
- Strong understanding of algorithms and data structures.
- Excellent coding skills.
- Passion for software development and a continuous drive to enhance your knowledge and skills.
- Proficient verbal and written English communication.

Preferred qualifications:
- Experience with Scrum/Agile development methodologies.
- Background in developing large-scale distributed products.
- Hands-on experience with core engineering infrastructure tools like Git, TeamCity, and Puppet.
- Familiarity with technologies such as queueing systems, Spark, Hadoop, NoSQL databases, the Play framework, and the Akka library.

Agoda is an Equal Opportunity Employer. Your application will be kept on file for future opportunities, and you can request removal of your details at any time. For more information, please refer to our privacy policy.

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

Be a part of a dynamic team and excel in an environment that values diversity and creativity. Continue to sharpen your skills and ambition while pushing the industry forward. As a Data Architect at JPMorgan Chase within Employee Platforms, you serve as a seasoned member of a team developing high-quality data architecture solutions for various software applications and platforms. By incorporating leading best practices and collaborating with teams of architects, you are an integral part of delivering critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives. In this role, you will be responsible for designing and implementing data models that support our organization's data strategy. You will work closely with Data Product Managers, Engineering teams, and Data Governance teams to ensure the delivery of high-quality data products that meet business needs and adhere to best practices.

Job responsibilities include:
- Executing data architecture solutions and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions and break down problems.
- Collaborating with Data Product Managers to understand business requirements and translate them into data modeling specifications; conducting interviews and workshops with stakeholders to gather detailed data requirements.
- Creating and maintaining data dictionaries, entity-relationship diagrams, and other documentation to support data models.
- Producing secure, high-quality production code and maintaining algorithms that run synchronously with appropriate systems.
- Evaluating data architecture designs and providing feedback on recommendations.
- Representing the team in architectural governance bodies.
- Leading the data architecture team in evaluating new technologies to modernize the architecture using existing data standards and frameworks.
- Gathering, analyzing, synthesizing, and developing visualizations and reporting from large, diverse data sets in service of continuous improvement of data frameworks, applications, and systems.
- Proactively identifying hidden problems and patterns in data and using these insights to drive improvements to coding hygiene and system architecture.
- Contributing to data architecture communities of practice and events that explore new and emerging technologies.

Required qualifications, capabilities, and skills:
- Formal training or certification in data architecture and 3+ years of applied experience.
- Hands-on experience with data platforms, cloud services (e.g., AWS, Azure, or Google Cloud), and big data technologies.
- Strong understanding of database management systems, data warehousing, and ETL processes.
- Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams.
- Knowledge of data governance principles and best practices.
- Ability to evaluate current technologies and recommend ways to optimize data architecture.
- Hands-on practical experience in system design, application development, testing, and operational stability.
- Experience developing, debugging, and maintaining code in a large corporate environment with one or more modern programming and database querying languages.
- Overall knowledge of the Software Development Life Cycle.
- Solid understanding of agile methodologies such as continuous integration and delivery, application resiliency, and security.
- Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile).

Preferred qualifications, capabilities, and skills:
- Experience with cloud-based data platforms (e.g., AWS, Azure, Google Cloud).
- Familiarity with big data technologies (e.g., Hadoop, Spark).
- Certification in data modeling or data architecture.

Posted 3 days ago

Apply

0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Job Responsibilities:
- Collaborate with data scientists, software engineers, and business stakeholders to understand data requirements and design efficient data models.
- Develop, implement, and maintain robust and scalable data pipelines, ETL processes, and data integration solutions.
- Extract, transform, and load data from various sources, ensuring data quality, integrity, and consistency.
- Optimize data processing and storage systems to handle large volumes of structured and unstructured data efficiently.
- Perform data cleaning, normalization, and enrichment tasks to prepare datasets for analysis and modeling.
- Monitor data flows and processes, and identify and resolve data-related issues and bottlenecks.
- Contribute to the continuous improvement of data engineering practices and standards within the organization.
- Stay up to date with industry trends and emerging technologies in data engineering, artificial intelligence, and dynamic pricing.

Candidate Profile:
- Strong passion for data engineering, artificial intelligence, and problem-solving.
- Solid understanding of data engineering concepts, data modeling, and data integration techniques.
- Proficiency in Python, SQL, and web scraping.
- Understanding of databases (NoSQL, relational, and in-memory) and technologies like MongoDB, Redis, and Apache Spark would be an advantage.
- Knowledge of distributed computing frameworks and big data technologies (e.g., Hadoop, Spark) is a plus.
- Excellent analytical and problem-solving skills, with a keen eye for detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment.
- Self-motivated, a quick learner, and adaptable to changing priorities and technologies.

(ref:hirist.tech)
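
Given the emphasis on Python, SQL, and web scraping, a small extract-and-clean sketch may make the pipeline work concrete. The URL, table layout, and column names below are invented for illustration; this is not project code.

```python
# Hypothetical extract-and-clean step: scrape an HTML price table,
# normalize it with pandas, and stage it for a downstream load.
import pandas as pd
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/prices", timeout=30)  # placeholder URL
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = [
    [cell.get_text(strip=True) for cell in tr.find_all(["td", "th"])]
    for tr in soup.select("table tr")
]

df = pd.DataFrame(rows[1:], columns=rows[0])               # first row as header
df["price"] = pd.to_numeric(df["price"], errors="coerce")  # assumed column name
df = df.dropna(subset=["price"]).drop_duplicates()

df.to_csv("prices_clean.csv", index=False)  # staged for loading into MongoDB/SQL
```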

Posted 3 days ago

Apply

0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Job Description: The primary role of the Software Developer will be to carry out a variety of software/web application development activities to support internal and external projects.

Job Responsibilities:
- Facilitate solution efficiency, scalability, and technology stack leadership.
- Ensure foolproof and robust applications through unit tests and other quality control measures.
- Follow an agile development process and enable rapid solutions to business challenges.
- Take inputs from internal and external clients and constantly strive to improve solutions.
- Follow software design, development, testing, and documentation best practices.
- Data engineering: extract and parse data from online and local data sources; clean up data and audit it for accuracy, consistency, and completeness; use tools such as (but not limited to) Excel, SQL, Python, SAS, R, and MATLAB to extract valuable, actionable insights.
- Data processing and visualization: summarize insights in simple yet powerful charts, reports, slides, etc.
- Data storage and management: MySQL (AWS RDS), MongoDB.
- Application frameworks: React, React Native, Django.
- Data integration technologies: RESTful APIs, AWS S3, and UI data uploads.
- Project operations: for internal and external client projects, use our proprietary tools to perform data engineering, analytics, and visualization activities; take responsibility for project deliveries, escalation, continuous improvement, and customer success.
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces.
- Follow system testing and validation procedures.

Candidate Profile:
- Strong knowledge of MVC frameworks, SQL coding, and at least one of the following: AngularJS, Django, Flask, Ruby on Rails, or NodeJS.
- Proficiency in software development.
- Strong in algorithm design, database programming (RDBMS), and text analytics.
- Knowledge of NoSQL and Big Data technologies like MongoDB, Apache Spark, the Hadoop stack, and the Python data science stack is a plus.
- High problem-solving skills: able to logically break down problems into incremental milestones, prioritize high-impact deliverables first, identify bottlenecks, and work around them.
- Self-learner: highly curious, a self-starter, and able to work with minimal supervision and guidance. An entrepreneurial mindset with a positive attitude is a must.
- Track record of excellence in academic or non-academic areas, with significant accomplishments.
- Excellent written and oral communication and interpersonal skills, with a high degree of comfort working in teams and making teams successful.

(ref:hirist.tech)

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a skilled professional in the field of Big Data and analytics, you will be responsible for applying your expertise to drive impactful solutions for Standard Chartered Bank. Your role will involve leveraging your proficiency in technologies and frameworks such as Hadoop, HDFS, Hive, Spark, Bash scripting, and SQL. Your ability to handle raw and unstructured data while adhering to coding standards and software development life cycles will be crucial to the success of the projects you work on.

In addition to your technical skills, you will play a key role in regulatory and business conduct by embodying the highest standards of ethics and compliance. Your responsibilities will include identifying and mitigating risks and ensuring compliance with relevant laws and regulations. Collaborating effectively with FCSO development teams and FCSO business stakeholders will be essential to achieving the desired outcomes.

Your technical competencies in areas such as Hadoop, Apache Hive, PySpark, SQL, Azure DevOps, and Control-M will be instrumental in fulfilling the responsibilities of this role. Your action-oriented approach, ability to collaborate, and customer focus will further contribute to your success in this position.

Standard Chartered Bank is committed to fostering a diverse and inclusive work environment where each individual's unique talents are celebrated. By joining our team, you will have the opportunity to make a positive impact and drive commerce and prosperity through our valued behaviours. If you are passionate about using your skills to create meaningful change and grow professionally, we invite you to be a part of our dynamic team at Standard Chartered Bank.

Posted 3 days ago

Apply

7.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Manager Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Responsibilities: Job Description: · Analyses current business practices, processes, and procedures as well as identifying future business opportunities for leveraging Microsoft Azure Data & Analytics Services. · Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modeling, database design & implementation, data visualization, and advanced analytics. · Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications. · Develop best practices including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL. · Maintain best practice standards for the development or cloud-based data warehouse solutioning including naming standards. · Designing and implementing highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks · Integrating the end-to-end data pipeline to take data from source systems to target data repositories ensuring the quality and consistency of data is always maintained · Working with other members of the project team to support delivery of additional project components (API interfaces) · Evaluating the performance and applicability of multiple tools against customer requirements · Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints. · Integrate Databricks with other technologies (Ingestion tools, Visualization tools). 
Requirements:
· Proven experience working as a data engineer.
· Highly proficient in using the Spark framework (Python and/or Scala).
· Extensive knowledge of data warehousing concepts, strategies, and methodologies.
· Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks).
· Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics.
· Experience in designing and hands-on development of cloud-based analytics solutions.
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required.
· Experience designing and building data pipelines using API ingestion and streaming ingestion methods.
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
· Thorough understanding of Azure cloud infrastructure offerings.
· Strong experience in common data warehouse modelling principles, including Kimball.
· Working knowledge of Python is desirable.
· Experience developing security models.
· Databricks & Azure Big Data Architecture certification would be a plus.
· Must be team oriented with strong collaboration, prioritization, and adaptability skills.

Mandatory skill sets: Azure Databricks
Preferred skill sets: Azure Databricks
Years of experience required: 7-10 years
Education qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering
Required Skills: Databricks Platform
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}
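
For context, a minimal PySpark sketch of the kind of pipeline this role describes — reading raw source data, applying quality checks, and writing to a curated zone. The paths, column names, and Delta format are all assumptions (Delta assumes a Databricks-style workspace), not anything specified in the posting:

```python
# Minimal illustrative sketch; paths, schema, and table names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales-curation").getOrCreate()

# Ingest raw CSV landed by an upstream tool such as Azure Data Factory.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/mnt/landing/sales_raw/"))

# Basic quality and consistency checks before loading the target.
curated = (raw
           .dropDuplicates(["order_id"])
           .filter(F.col("amount").isNotNull())
           .withColumn("load_date", F.current_date()))

# Write to the curated zone; Delta format assumes a Databricks environment.
curated.write.format("delta").mode("overwrite").save("/mnt/curated/sales_curated/")
```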

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

Guwahati, Assam

On-site

You are an experienced Software Engineer specializing in Machine Learning with at least 2 years of relevant experience. In this role, you will be responsible for designing, developing, and optimizing machine learning solutions and data systems. Your proven track record in implementing ML models, building scalable systems, and collaborating with cross-functional teams will be essential in solving complex challenges using data-driven approaches.

As a Software Engineer - Machine Learning, your primary responsibilities will include designing and implementing end-to-end machine learning solutions, building and optimizing scalable data pipelines, collaborating with data scientists and product teams, monitoring and optimizing deployed models, staying updated with the latest trends in machine learning, debugging complex issues related to ML systems, and documenting processes for knowledge sharing and clarity.

To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related field. Your technical skills should include strong proficiency in Python and machine learning libraries such as TensorFlow, PyTorch, or scikit-learn; experience with data processing tools like Pandas, NumPy, and Spark; proficiency in SQL and database systems; hands-on experience with cloud platforms (AWS, GCP, Azure); familiarity with CI/CD pipelines and Git; and experience with model deployment frameworks like Flask, FastAPI, or Docker. Additionally, you should possess strong analytical skills, leadership abilities to guide junior team members, and a proactive approach to learning and collaboration.

Preferred qualifications include experience with MLOps tools like MLflow, Kubeflow, or SageMaker; knowledge of big data technologies such as Hadoop, Spark, or Kafka; familiarity with advanced ML techniques like NLP, computer vision, or reinforcement learning; and experience in designing and managing streaming data workflows.

Key Performance Indicators for this role include successfully delivering optimized and scalable ML solutions within deadlines, maintaining high model performance in production environments, and ensuring seamless integration of ML models with business applications. Join us in this exciting opportunity to drive innovation and make a significant impact in the field of Machine Learning.
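
As a rough illustration of the model-deployment side of this role, here is a minimal sketch serving a scikit-learn model behind a FastAPI endpoint. The model file name, payload shape, and route are hypothetical, not taken from the posting:

```python
# Minimal model-serving sketch; model path, payload shape, and route are hypothetical.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # a previously trained scikit-learn estimator

class Features(BaseModel):
    values: list[float]  # flat feature vector expected by the model

@app.post("/predict")
def predict(features: Features) -> dict:
    X = np.asarray(features.values).reshape(1, -1)
    return {"prediction": model.predict(X).tolist()}
```

In practice an app like this would be run with an ASGI server (for example `uvicorn`) and packaged in a Docker image, which is presumably what the listing means by deployment frameworks like FastAPI or Docker.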

Posted 3 days ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

Sykatiya Technology Pvt Ltd is a leading semiconductor-industry innovator committed to leveraging cutting-edge technology to solve complex problems. We are currently looking for a highly skilled and motivated Data Scientist to join our dynamic team and contribute to our mission of driving innovation through data-driven insights.

As the Lead Data Scientist and Machine Learning Engineer at Sykatiya Technology Pvt Ltd, you will play a crucial role in analyzing large datasets to uncover patterns, develop predictive models, and implement AI/ML solutions. Your responsibilities will include working on projects involving neural networks, deep learning, data mining, and natural language processing (NLP) to drive business value and enhance our products and services.

Key Responsibilities:
- Lead the design and implementation of machine learning models and algorithms to address complex business problems.
- Utilize deep learning techniques to enhance neural network models and improve prediction accuracy.
- Conduct data mining and analysis to extract actionable insights from both structured and unstructured data.
- Apply natural language processing (NLP) techniques for advanced text analytics.
- Develop and maintain end-to-end data pipelines, ensuring data integrity and reliability.
- Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions.
- Mentor and guide junior data scientists and engineers in best practices and advanced techniques.
- Stay updated with the latest advancements in AI/ML, neural networks, deep learning, data mining, and NLP.

Technical Skills:
- Proficiency in Python and its libraries such as NumPy, pandas, scikit-learn, TensorFlow, Keras, and PyTorch.
- Strong understanding of machine learning algorithms and techniques.
- Extensive experience with neural networks and deep learning frameworks.
- Hands-on experience with data mining and analysis techniques.
- Proficiency in natural language processing (NLP) tools and libraries like NLTK, spaCy, and transformers.
- Proficiency in big data technologies including Sqoop, Hadoop, HDFS, Hive, and PySpark.
- Experience with cloud platforms, such as AWS services like S3, Step Functions, EventBridge, Athena, RDS, Lambda, and Glue.
- Strong knowledge of database management systems like SQL, Teradata, MySQL, PostgreSQL, and Snowflake.
- Familiarity with other tools like ExactTarget, Marketo, SAP BO, Agile, and JIRA.
- Strong analytical skills to analyze large datasets and derive actionable insights.
- Excellent problem-solving skills with the ability to think critically and creatively.
- Effective communication skills and teamwork abilities to collaborate with various stakeholders.

Experience:
- At least 8 to 12 years of experience in a similar role.
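
The posting names no specific workflow, but one small, self-contained illustration of the text-analytics work it describes is TF-IDF vectorization plus clustering in scikit-learn. The documents and cluster count below are made-up examples:

```python
# Tiny text-analytics sketch; the documents and cluster count are made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "wafer yield dropped after the etch process change",
    "etch chamber drift correlates with lower yield",
    "customer survey praises the new dashboard UI",
]

# Convert free text into TF-IDF features, then group similar documents.
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init="auto", random_state=0).fit_predict(X)
print(dict(zip(docs, labels)))
```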

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

You should have 6-8 years of hands-on experience with big data technologies such as PySpark (DataFrame and SparkSQL), Hadoop, and Hive. Additionally, you should possess good hands-on experience with Python and Bash scripts, along with a solid understanding of SQL and data warehouse concepts. Strong analytical, problem-solving, data analysis, and research skills are crucial for this role. It is essential to have a demonstrable ability to think creatively and independently, beyond relying solely on readily available tools. Excellent communication, presentation, and interpersonal skills are a must for effective collaboration within the team.

Hands-on experience with cloud-platform big data services such as IAM, Glue, EMR, Redshift, S3, and Kinesis is required. Experience in orchestrating with Airflow or any job scheduler is highly beneficial. Familiarity with migrating workloads from on-premise to cloud, as well as cloud-to-cloud migrations, is also desired.

In this role, you will be responsible for developing efficient ETL pipelines based on business requirements while adhering to development standards and best practices. Integration testing of different pipelines in an AWS environment and providing estimates for development, testing, and deployment on various environments will be part of your responsibilities. Participation in code peer reviews to ensure compliance with best practices is essential. Creating cost-effective AWS pipelines using necessary AWS services like S3, IAM, Glue, EMR, Redshift, etc., is a key aspect of this position; a brief sketch of such a pipeline step follows below.

Your experience should range from 6 to 8 years in relevant fields. The job reference number for this position is 13024.
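
A minimal PySpark sketch of the DataFrame-plus-SparkSQL style of ETL this role calls for. The bucket, table, and column names are hypothetical placeholders:

```python
# Illustrative ETL step; source path, table name, and columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Load raw orders (e.g., landed on S3 by an upstream process) and expose as a SQL view.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")
orders.createOrReplaceTempView("orders")

# SparkSQL transformation: daily revenue per region, ready for the warehouse load.
daily_revenue = spark.sql("""
    SELECT region,
           order_date,
           SUM(amount) AS revenue
    FROM orders
    WHERE amount IS NOT NULL
    GROUP BY region, order_date
""")

daily_revenue.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://example-bucket/curated/daily_revenue/")
```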

Posted 3 days ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

Data Science is all about breaking new ground to enable businesses to answer their most urgent questions. Pioneering massively parallel, data-intensive analytic processing, our mission is to develop a whole new approach to generating meaning and value from petabyte-scale data sets and to shape brand-new methodologies, tools, statistical methods, and models. In collaboration with leading academics, industry experts, and highly skilled engineers, the focus is on equipping customers to generate sophisticated new insights from the biggest of big data.

Join to do the best work of your career and make a profound social impact as a Senior Data Engineering Advisor on the Data Engineering Team in Bangalore. As a Senior Data Engineering Advisor, you will be responsible for developing technical tools and programs to automate the data management process, integrating medium to large structured data sets. You will have the opportunity to partner with Data Scientists, Architects, or Businesses to design strategic projects and improve complex processes.

You will:
- Design and build analytics solutions that deliver transformative insights from extremely large data sets.
- Design, develop, and implement web applications for self-service delivery of analytics.
- Design and develop APIs, database objects, machine learning algorithms, and necessary server-side code to support applications.
- Work closely with team members to quickly integrate new components and features into the current application ecosystem.
- Continuously evaluate industry trends for opportunities to utilize new technologies and methodologies, and implement these into the solution stack as appropriate.

Every team member brings something unique to the table. Here's what is required for this role:

Essential Requirements:
- Bachelor's or advanced degree in Computer Science, Applied Mathematics, Engineering, or a related field, with 8 to 12 years of experience in using data technologies to deliver cutting-edge business intelligence solutions.
- Strong communication and presentation skills, with the ability to articulate big data platforms and solution designs to both technical and non-technical stakeholders.
- Knowledge of key data engineering technologies, including big data tools, cloud services, and object-oriented or object-function scripting languages.
- Working knowledge of statistics and machine learning concepts.
- Demonstrated ability to write, tune, and debug performant SQL.
- Experience with related technologies such as C#, .NET, HTML5, JavaScript, SQL Server, Teradata, Hadoop, Spark, and R.
- Demonstrated ability to create rich web interfaces using a modern client-side framework.
- Strong knowledge of computer science fundamentals and demonstrated ability to apply them effectively in the real world.

Desirable Requirements:
- Bachelor's degree.

Dell Technologies believes that each team member has the power to make an impact. The organization puts team members at the center of everything it does, with opportunities to grow careers with some of the best minds and most advanced tech in the industry. Application closing date: 15 July 2025. Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment.

Posted 3 days ago

Apply

10.0 - 18.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

You should possess a BTech degree in computer science, engineering, or a related field of study, or have 12+ years of related work experience. Additionally, you should have at least 7 years of design and implementation experience with large-scale, data-centric distributed applications. It is essential to have professional experience in architecting and operating cloud-based solutions, with a good understanding of core disciplines such as compute, networking, storage, security, and databases. A strong grasp of data engineering concepts like storage, governance, cataloging, data quality, and data modeling is required, as is familiarity with architecture patterns such as data lake, data lakehouse, and data mesh.

You should have a good understanding of data warehousing concepts and hands-on experience with tools like Hive, Redshift, Snowflake, and Teradata. Experience in migrating or transforming legacy customer solutions to the cloud is highly valued. Moreover, experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, and DataZone is necessary. A thorough understanding of Big Data ecosystem technologies such as Hadoop, Spark, Hive, and HBase, along with other relevant tools and technologies, is expected. Knowledge of designing analytical solutions using AWS cognitive services like Textract, Comprehend, Rekognition, and SageMaker is advantageous. You should also have experience with modern development workflows such as Git, continuous integration/continuous deployment pipelines, static code analysis tooling, and infrastructure-as-code. Proficiency in a programming or scripting language like Python, Java, or Scala is required. An AWS Professional/Specialty certification or equivalent cloud expertise is a plus.

In this role, you will be responsible for driving innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. You should be capable of leading a technology team, fostering an innovative mindset, and enabling fast-paced delivery. Adapting to new technologies, learning quickly, and managing high ambiguity are essential skills for this position. You will collaborate with business stakeholders, participate in architectural, design, and status calls, and present effectively to executives, IT management, and developers.

Furthermore, you will drive technology/software sales or pre-sales consulting discussions, ensure end-to-end ownership of tasks, and maintain high-quality software development with complete documentation and traceability. Fulfilling organizational responsibilities, sharing knowledge and experience with other teams/groups, conducting technical training sessions, and producing whitepapers, case studies, and blogs are also part of this role.

The ideal candidate for this position should have 10 to 18 years of experience. The job reference number is 12895.

Posted 3 days ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

The team at Walmart Global Tech builds reusable technologies to assist in acquiring customers, onboarding and empowering merchants, and ensuring a seamless experience for all stakeholders. They focus on optimizing tariffs and assortment while maintaining Walmart's philosophy of Everyday Low Cost. Additionally, the team creates personalized omnichannel experiences for customers across various platforms. The Marketplace team serves as the gateway for third-party sellers, enabling them to manage their onboarding, catalog, orders, and returns. They are responsible for designing, developing, and operating large-scale distributed systems using cutting-edge technologies.

As a Staff Data Scientist at Walmart Global Tech, you will be responsible for developing scalable end-to-end data science solutions for data products. You will collaborate with data engineers and analysts to build ML- and statistics-driven data quality workflows. Your role will involve solving business problems by scaling advanced machine learning algorithms on large datasets. You will own the MLOps lifecycle, from data monitoring to model lifecycle management. Additionally, you will demonstrate thought leadership by consulting with product and business stakeholders to deploy machine learning solutions.

The ideal candidate should have knowledge of machine learning and statistics, experience with web service standards, and proficiency in architecting solutions with continuous integration and continuous delivery. Strong coding skills in Python, experience with big data technologies like Hadoop and Spark, and the ability to work in a big data ecosystem are preferred. Candidates should have experience in developing and deploying machine learning solutions and collaborating with data scientists. Effective communication skills and the ability to present complex ideas clearly are also desired. Educational qualifications in Computer Science, Statistics, Engineering, or related fields are preferred. Candidates with prior experience in Delivery Promise Optimization or Supply Chain domains and hands-on experience in Spark or similar frameworks will be given preference.

At Walmart Global Tech, you will have the opportunity to work in a dynamic environment where your contributions can impact millions of people. The team values innovation, collaboration, and continuous learning. Join us in reimagining the future of retail and making a difference on a global scale.

Walmart Global Tech offers a hybrid work model that combines in-office and virtual presence. The company provides competitive compensation, incentive awards, and a range of benefits including maternity leave, health benefits, and more. Walmart fosters a culture of belonging where every associate is valued and respected. The company is committed to creating opportunities for all associates, customers, and suppliers. As an Equal Opportunity Employer, Walmart believes in understanding, respecting, and valuing the uniqueness of individuals while promoting inclusivity.
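
The posting does not name a tool for model lifecycle management, but one common way to cover the experiment-tracking part of the MLOps lifecycle it mentions is MLflow. A minimal sketch, with all model, parameter, and metric names made up:

```python
# Hypothetical MLOps snippet: tracking a model run with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy training data stands in for a real feature pipeline.
X, y = make_classification(n_samples=500, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # stored for later review or promotion
```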

Posted 3 days ago

Apply

5.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Senior Big Data Engineer specializing in AWS and Scala, you will be responsible for designing, architecting, and implementing scalable data engineering solutions. Your role will involve building and optimizing big data architectures using AWS services for large-scale data processing, developing data pipelines with Spark (Scala), and operationalizing data engineering and analytics platforms. You will work on real-time streaming solutions using Kafka and AWS Kinesis, support ML model operationalization on AWS, and ensure high-quality, reliable data delivery for business needs by analyzing data flows and writing efficient SQL queries.

Collaboration with cross-functional teams is essential to provide technical leadership, mentor team members, and enhance data engineering capabilities. Troubleshooting and resolving technical issues to ensure scalability, performance, and security of data solutions is also a key part of the role.

The ideal candidate for this position should have 5+ years of hands-on experience in big data technologies, specifically AWS, Scala, Hadoop, and Spark. Proven expertise in Spark with Scala is mandatory, along with practical experience in AWS services such as EMR, Glue, Lambda, S3, CloudFormation, API Gateway, Athena, and Lake Formation.

If you meet the mandatory skills and qualifications and are eager to contribute to a dynamic team environment, we encourage you to share your resume with us at Aarushi.Shukla@coforge.com.
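
Although this role emphasizes Scala, the Kafka-to-S3 streaming pattern it describes can be sketched just as well with Spark Structured Streaming in PySpark. The broker address, topic, and paths below are placeholders:

```python
# Streaming sketch; broker, topic, and bucket paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Read a live event stream from Kafka (requires the spark-sql-kafka package on the classpath).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers raw bytes; cast the payload and land it on S3 in parquet.
query = (events.select(F.col("value").cast("string").alias("payload"))
         .writeStream
         .format("parquet")
         .option("path", "s3://example-bucket/streams/events/")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
         .start())

query.awaitTermination()
```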

Posted 3 days ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Architect Vice President based in Chennai, you will play a crucial role in designing, developing, and implementing solutions to solve complex business problems. Your primary responsibility will be collaborating with stakeholders to understand their needs and requirements, and designing and implementing solutions that meet those needs while balancing technology risks against business delivery.

You will drive consistency in data architecture and platform design, ensuring they align with policy and technical data standards. Your role will involve translating business/use case requirements into logical and physical data models, which serve as the foundation for data engineers to build data products. This includes capturing requirements from business teams, translating them into data models while considering performance implications, and testing models with data engineers. Continuous monitoring and optimization of the performance of these models will be essential to ensure efficient data retrieval and processing.

You will collaborate with the CDA team to design data product solutions, covering data architecture, platform design, and integration patterns. Additionally, you will work with the technical product lead on data governance requirements, including data ownership of data assets and data quality lineage and standards. Partnering with business stakeholders to understand their data needs and desired functionality for the data product will also be a key aspect of your role.

To be successful in this role, you should have experience with cloud platforms (specifically AWS), big data technologies such as Hadoop, data warehousing and analytics platforms like Teradata and Snowflake, SQL/scripting, and data governance and quality. It is crucial to have the ability to engage with business stakeholders, tech teams, and data engineers to define requirements, align data strategies, and deliver high-value solutions. Proven experience leading cross-functional teams to execute complex data architectures is also required.

Additional skills that would be highly valued include advanced cloud services familiarity, data orchestration and automation, performance tuning and optimization, and data visualization. You may be assessed on key critical skills relevant for success in this role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, in addition to job-specific technical skills.

Your accountabilities will include designing and developing solutions as products that can evolve to meet business requirements, aligned with modern software engineering practices and automated delivery tooling. You will need to apply targeted design activities that maximise the benefit of cloud capabilities and adopt standardised solutions where they fit, feeding into their ongoing evolution where appropriate. Additionally, you will provide fault-finding and performance-issue support to operational support teams, among other responsibilities.

As a Vice President, you are expected to contribute to or set strategy, drive requirements, and make recommendations for change. If you have leadership responsibilities, you should demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. For individual contributors, you are expected to be a subject matter expert within your own discipline, guide technical direction, lead collaborative multi-year assignments, and coach less experienced specialists. Overall, you are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, along with the Barclays Mindset of Empower, Challenge, and Drive in your day-to-day work.

Posted 3 days ago

Apply