
8340 Hadoop Jobs - Page 16

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Risk
Management Level: Associate

Job Description & Summary
At PwC, our people in audit and assurance focus on providing independent and objective assessments of financial statements, internal controls, and other assurable information, enhancing the credibility and reliability of this information for a variety of stakeholders. They evaluate compliance with regulations, including assessing governance and risk management processes and related controls. Those in internal audit at PwC help build, optimise and deliver end-to-end internal audit services to clients in all industries. This includes IA function setup and transformation, co-sourcing, outsourcing and managed services, using AI and other risk technology and delivery models. IA capabilities are combined with other industry and technical expertise, in areas like cyber, forensics and compliance, to address the full spectrum of risks. This helps organisations harness the power of IA to protect value, navigate disruption, and gain the confidence to take risks that power growth.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary
We are looking for a skilled Azure Data Engineer to join our Data Analytics (DA) team. The ideal candidate will have a strong understanding of Azure technologies and components, along with the ability to architect web applications on the Azure framework. As part of the team, you will be responsible for end-to-end implementation projects utilizing GenAI-based models and frameworks, contributing to our innovative data-driven solutions.

Responsibilities:
- Architecture & Design: Design and architect web applications on the Azure platform, ensuring scalability, reliability, and performance.
- End-to-End Implementation: Lead the implementation of data solutions from ingestion to visualization, leveraging GenAI-based models and frameworks to drive analytics initiatives.
- Development & Deployment: Write clean, maintainable code in Python and PySpark, and deploy applications and services on Azure using best practices.
- Data Engineering: Build robust data pipelines and workflows to automate data processing and ensure seamless integration across various data sources.
- Collaboration: Work closely with cross-functional teams, including data scientists, product managers, and business analysts, to understand data requirements and develop effective solutions.
- Optimization: Optimize data processes and pipelines to improve performance and reduce costs, utilizing services within the Azure ecosystem.
- Documentation & Reporting: Document architecture, development processes, and technical specifications; provide regular updates to stakeholders.

Technical Skills and Requirements:
- Azure Expertise: Strong knowledge of Azure components such as Azure Data Lake, Azure Databricks, Azure SQL Database, Azure Storage, and Azure Functions, among others.
- Programming Languages: Proficient in Python and PySpark for data processing, scripting, and integration tasks.
- Big Data Technologies: Familiarity with big data tools and frameworks, especially Hadoop, and experience with data engineering concepts.
- Databricks: Experience using Azure Databricks for building scalable and efficient data pipelines.
- Database Management: Strong SQL skills for data querying, manipulation, and management.
- Data Visualization (if necessary): Basic knowledge of Power BI or similar tools for creating interactive reports and dashboards.
- Cloud Understanding: Familiarity with AWS is a plus, enabling cross-platform integration or migration tasks.

Mandatory Skill Sets: As above
Preferred Skill Sets: As above
Years of Experience: 3 to 8 years of professional experience in data engineering, with a focus on Azure-based solutions and web application architecture
Education Qualification: Bachelor's degree (B.Tech) or Master's degree (M.Tech, MCA) in Economics, Computer Science, Information Technology, Mathematics, or Statistics. A background in the Finance domain is preferred.
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study Required: Bachelor Degree
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: Generative AI
Optional Skills: Accepting Feedback, Accounting and Financial Reporting Standards, Active Listening, Artificial Intelligence (AI) Platform, Auditing, Auditing Methodologies, Business Process Improvement, Communication, Compliance Auditing, Corporate Governance, Data Analysis and Interpretation, Data Ingestion, Data Modeling, Data Quality, Data Security, Data Transformation, Data Visualization, Emotional Regulation, Empathy, Financial Accounting, Financial Audit, Financial Reporting, Financial Statement Analysis, Generally Accepted Accounting Principles (GAAP) {+ 19 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
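The listing above centres on Python/PySpark pipelines that ingest, clean, and aggregate data on the way to visualization. As a rough illustrative sketch only (plain Python standing in for PySpark; the region/amount schema and field names are invented), a cleaning-and-aggregation stage of such a pipeline might look like:

```python
from collections import defaultdict

def clean(records):
    """Drop malformed rows and normalize types (hypothetical schema)."""
    cleaned = []
    for r in records:
        if r.get("amount") is None or r.get("region") is None:
            continue  # skip rows failing basic quality checks
        cleaned.append({"region": r["region"].strip().lower(),
                        "amount": float(r["amount"])})
    return cleaned

def aggregate(records):
    """Total amount per region, analogous to a groupBy().sum() in Spark."""
    totals = defaultdict(float)
    for r in records:
        totals[r["region"]] += r["amount"]
    return dict(totals)

raw = [{"region": " West ", "amount": "10.5"},
       {"region": "east", "amount": 4},
       {"region": None, "amount": 3}]
print(aggregate(clean(raw)))  # {'west': 10.5, 'east': 4.0}
```

In actual PySpark the aggregation step would typically be a `df.groupBy("region").sum("amount")` over a DataFrame; the plain-Python version just makes the logic explicit.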

Posted 6 days ago


0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.
Responsibilities:
- Design and build data pipelines and data lakes to automate ingestion of structured and unstructured data, providing fast, optimized, and robust end-to-end solutions.
- Knowledge of data lake and data warehouse concepts.
- Experience working with AWS big data technologies.
- Improve the data quality and reliability of data pipelines through monitoring, validation and failure detection.
- Deploy and configure components to production environments.

Technology: Redshift, S3, AWS Glue, Lambda, SQL, PySpark
Mandatory Skill Sets: AWS Data Engineer
Preferred Skill Sets: AWS Data Engineer
Years of Experience Required: 4-8
Education Qualification: B.Tech/MBA/MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study Required: Master of Business Administration, Bachelor of Technology
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: AWS Development, Data Engineering
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date
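One responsibility above is improving pipeline reliability through monitoring, validation and failure detection. A minimal sketch of the idea (plain Python; the column names and the 1% null-rate threshold are invented for illustration) of a check that could gate a batch load:

```python
def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def validate_batch(rows, required=("id", "event_ts"), max_null_rate=0.01):
    """Return (ok, report); fail the batch if any required column
    exceeds the allowed null rate."""
    report = {c: null_rate(rows, c) for c in required}
    ok = all(rate <= max_null_rate for rate in report.values())
    return ok, report

batch = [{"id": 1, "event_ts": "2024-01-01"},
         {"id": 2, "event_ts": None}]
ok, report = validate_batch(batch)
print(ok, report)  # half the rows are missing event_ts, so the batch fails
```

In a production AWS pipeline a failed check would typically raise an alert or stop the downstream Glue/Redshift load rather than just print.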

Posted 6 days ago


6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: SAP
Management Level: Senior Associate

Job Description & Summary
Responsibilities:
- Utilizing expertise in Power Apps, Power Pages, Power Automate, and Power Virtual Agent development.
- Designing and creating custom business apps, such as Canvas Apps, SharePoint Form Apps, Model-Driven Apps, and Portals/Power Pages portals.
- Implementing various Power Automate flows, including Automated, Instant, Business Process Flow, and UI Flows.
- Collaborating with backend teams to integrate Power Platform solutions with SQL Server and SPO.
- Demonstrating strong knowledge of Dataverse, including security and permission levels.
- Developing and utilizing custom connectors in Power Platform solutions.
- Creating and consuming functions/APIs to retrieve and update data from the database.
- Managing managed solutions to ensure seamless deployment and version control.
- Experience with Azure DevOps CI/CD deployment pipelines.
- Monitoring and troubleshooting performance bottlenecks.
- Any coding/programming experience is a plus.
- Excellent communication skills.

Requirements:
- 6-9 years of relevant experience.
- Strong hands-on experience with Power Pages and Model-Driven Apps with Dataverse.
- Experience with Azure DevOps CI/CD deployment pipelines.
- Good communication skills.

Mandatory Skill Sets: Strong hands-on experience with Power Pages and Model-Driven Apps with Dataverse.
Preferred Skill Sets: Experience with Azure DevOps CI/CD deployment pipelines.
Years of Experience Required: 5 to 9 years
Education Qualification: Bachelor's degree in Computer Science, Engineering, or a related field.
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study Required: Bachelor of Engineering, Bachelor of Technology
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: Microsoft Power Apps
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date

Posted 6 days ago


3.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Accellor is looking for a Data Engineer with extensive experience in developing ETL processes using PySpark Notebooks and Microsoft Fabric, and in supporting existing legacy SQL Server environments. The ideal candidate will possess a strong background in Spark-based development, demonstrate high proficiency in SQL, and be comfortable working independently, collaboratively within a team, or leading other developers when required.

Responsibilities:
- Design, develop, and maintain ETL pipelines using PySpark Notebooks and Microsoft Fabric.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver efficient data solutions.
- Migrate and integrate data from legacy SQL Server environments into modern data platforms.
- Optimize data pipelines and workflows for scalability, efficiency, and reliability.
- Provide technical leadership and mentorship to junior developers and other team members.
- Troubleshoot and resolve complex data engineering issues related to performance, data quality, and system scalability.
- Develop, maintain, and enforce data engineering best practices, coding standards, and documentation.
- Conduct code reviews and provide constructive feedback to improve team productivity and code quality.
- Support data-driven decision-making processes by ensuring data integrity, availability, and consistency across different platforms.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- Experience with Microsoft Fabric or similar cloud-based data integration platforms is a must.
- Minimum 3 years of experience in data engineering, with a strong focus on ETL development using PySpark or other Spark-based tools.
- Proficiency in SQL, with extensive experience in complex queries, performance tuning, and data modeling.
- Strong knowledge of data warehousing concepts, ETL frameworks, and big data processing.
- Familiarity with other data processing technologies (e.g., Hadoop, Hive, Kafka) is an advantage.
- Experience working with both structured and unstructured data sources.
- Excellent problem-solving skills and the ability to troubleshoot complex data engineering issues.
- Proven ability to work independently, as part of a team, and in leadership roles.
- Strong communication skills, with the ability to translate complex technical concepts into business terms.

Mandatory Skills:
- Experience with data lakes, data warehouses, and Delta Lake.
- Experience with Azure Data Services, including Azure Data Factory, Azure Synapse, or similar tools.
- Knowledge of scripting languages (e.g., Python, Scala) for data manipulation and automation.
- Familiarity with DevOps practices, CI/CD pipelines, and containerization (Docker, Kubernetes) is a plus.

Benefits:
- Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
- Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers.
- Work-Life Balance: Accellor prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
- Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training, stress management programs, professional certifications, and technical and soft-skill trainings.
- Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, personal accident insurance, periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
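A core responsibility above is migrating data from legacy SQL Server environments into modern platforms. A toy extract-and-transform sketch (Python's built-in sqlite3 standing in for the legacy source; the orders table and its columns are made up) showing one common cleanup step, deduplication on a primary key:

```python
import sqlite3

# sqlite3 stands in for the legacy SQL Server source (illustrative only).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
src.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, "acme", 120.0), (1, "acme", 120.0), (2, "globex", 75.5)])

def extract(conn):
    """Pull rows from the legacy table."""
    return conn.execute("SELECT id, customer, total FROM orders").fetchall()

def transform(rows):
    """Deduplicate on the primary key, keeping the first occurrence."""
    seen, out = set(), []
    for row in rows:
        if row[0] in seen:
            continue
        seen.add(row[0])
        out.append(row)
    return out

print(transform(extract(src)))  # [(1, 'acme', 120.0), (2, 'globex', 75.5)]
```

In a Fabric/PySpark pipeline the same step would usually be `dropDuplicates(["id"])` on a DataFrame, with the load target being a Delta table rather than a Python list.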

Posted 6 days ago


2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

The Applications Development Supervisor role is an intermediate management position in which you will lead and direct a team of employees to establish and implement new or revised application systems and programs in coordination with the Technology team. Your main objective will be to oversee applications systems analysis and programming activities.

Your responsibilities will include managing an Applications Development team, recommending new work procedures for process efficiencies, resolving issues by identifying solutions based on technical experience, developing comprehensive knowledge of how your area integrates within apps development, ensuring the quality of tasks delivered by the team, acting as a backup to the Applications Development Manager, and serving as an advisor to junior developers and analysts. You will also need to appropriately assess risk in business decisions, safeguarding Citigroup's reputation and assets by driving compliance with laws and regulations, adhering to policy, applying ethical judgment, and effectively supervising the activity of others.

To qualify for this role, you should have 2-4 years of relevant experience; proficiency in Big Data, Spark, Hive, Hadoop, Python, and Java; experience in managing and implementing successful projects; the ability to make technical decisions on software development projects; and knowledge of dependency management, change management, continuous-integration testing tools, audit/compliance requirements, software engineering, and object-oriented design. Demonstrated leadership, management skills, and clear communication are essential. A Bachelor's degree or equivalent experience is required for this position.

Please note that this job description provides an overview of the work performed, and other job-related duties may be assigned as necessary. If you require a reasonable accommodation due to a disability to use our search tools or apply for a career opportunity, please review Accessibility at Citi. You can also view Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 6 days ago


5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be joining our team as a Senior Data Scientist with expertise in Artificial Intelligence (AI) and Machine Learning (ML). The ideal candidate should possess a minimum of 5-7 years of experience in data science, focusing on AI/ML applications. You are expected to have a strong background in various ML algorithms, programming languages such as Python, R, or Scala, and data processing frameworks like Apache Spark. Proficiency in data visualization tools and experience in model deployment using Docker, Kubernetes, and cloud services will be essential for this role.

Your responsibilities will include end-to-end AI/ML project delivery, from data processing to model deployment. You should have a good understanding of statistics, probability, and the mathematical concepts used in AI/ML. Additionally, familiarity with big data tools, natural language processing techniques, time-series analysis, and MLOps will be advantageous.

As a Senior Data Scientist, you are expected to lead cross-functional project teams and manage data science projects in a production setting. Your problem-solving skills, communication skills, and curiosity to stay updated with the latest advancements in AI and ML are crucial for success in this role. You should be able to convey technical insights clearly to diverse audiences and quickly adapt to new technologies. If you are an innovative, analytical, and collaborative team player with a proven track record in AI/ML project delivery, we invite you to apply for this exciting opportunity.

Posted 6 days ago


2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

You should have at least 2 years of professional work experience implementing data pipelines using Databricks and data lakes. A minimum of 3 years of hands-on programming experience in Python within a cloud environment (preferably AWS) is necessary for this role. Two years of professional experience with real-time streaming systems such as Event Grid and Event Topics would be highly advantageous.

You must possess expert-level knowledge of SQL to write complex, highly optimized queries for processing large volumes of data effectively. Experience in developing conceptual, logical, and/or physical database designs using tools like ErWin, Visio, or Enterprise Architect is expected. A minimum of 2 years of hands-on experience working with databases like Snowflake, Redshift, Synapse, Oracle, SQL Server, Teradata, Netezza, Hadoop, MongoDB, or Cassandra is required. Knowledge of or experience with architectural best practices for building data lakes is a must for this position.

Strong problem-solving and troubleshooting skills are necessary, along with the ability to make sound judgments independently. You should be capable of working independently and providing guidance to junior data engineers. If you meet the above requirements and are ready to take on this challenging role, we look forward to your application.

Warm regards,
Rinka Bose
Talent Acquisition Executive
Nivasoft India Pvt. Ltd.
Mobile: +91-9632249758 (India) | 732-334-3491 (U.S.A.)
Email: rinka.bose@nivasoft.com | Web: https://nivasoft.com/
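The posting above stresses expert-level SQL and highly optimized queries over large data volumes. An illustrative miniature (sqlite3 in place of Snowflake/Redshift/etc.; the events table and its columns are invented) of the common pattern of indexing the filtered column before aggregating:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, kind TEXT, amount REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)",
                 [(1, "buy", 10.0), (1, "buy", 5.0), (2, "view", 0.0)])
# An index on the filter column lets the WHERE clause seek
# instead of scanning the whole table.
conn.execute("CREATE INDEX idx_events_kind ON events (kind)")

rows = conn.execute(
    """SELECT user_id, SUM(amount) AS total
       FROM events
       WHERE kind = 'buy'
       GROUP BY user_id
       ORDER BY total DESC"""
).fetchall()
print(rows)  # [(1, 15.0)]
```

Warehouse engines add their own mechanisms (sort keys, clustering, partition pruning), but the underlying idea of shaping storage around the filter and grouping columns is the same.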

Posted 6 days ago


9.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! The Challenge: Search, Discovery, and Content AI (SDC) is a cornerstone of Adobe’s ecosystem, enabling creative professionals and everyday users to access, discover, and demonstrate a wide array of digital assets and creative content, including images, videos, documents, vector graphics and more. With increasing demand for intuitive search, contextual discovery, and seamless content interactions across Adobe products like Express, Lightroom, and Adobe Stock, SDC is evolving into a generative AI powerhouse. This team develops innovative solutions for intent understanding, personalized recommendations, and action orchestration to transform how users interact with content. Working with extensive datasets and pioneering technologies, you will help redefine the discovery experience and drive user success. If working at the intersection of machine learning, generative AI, and real-time systems excites you, we’d love to have you join us. Responsibilities: As a Machine Learning Engineer on the SDC team, you will develop and optimize machine learning models and algorithms for search, recommendation, and content understanding across diverse content types. You will build and deploy scalable generative AI solutions to enable intelligent content discovery and contextual recommendations within Adobe products. 
Collaborating with multi-functional teams, you will integrate ML models into production systems, ensuring high performance, reliability, and user impact. Your role will involve researching, designing, and implementing pioneering techniques in natural language understanding, computer vision, and multimodal learning for content and asset discovery. You will also contribute to the end-to-end ML pipeline, including data preprocessing, model training, evaluation, deployment, and monitoring, while pushing the boundaries of computational efficiency to meet the needs of real-time, large-scale applications. Partnering with product teams, you will identify customer needs and translate them into innovative solutions that prioritize usability and performance. Additionally, you will mentor and provide technical guidance to junior engineers and multi-functional collaborators, driving excellence and innovation within the team.

What You'll Need to Succeed:
- Bachelor's degree or equivalent experience, or an advanced degree such as a Ph.D. in Computer Science, Machine Learning, Data Science, or a related field.
- 6-9 years of industry experience building and deploying machine learning systems at scale.
- Proficiency in programming languages such as Python (for ML/AI) and Java (for production-grade systems).
- Strong expertise in machine learning frameworks and tools (e.g., TensorFlow, PyTorch).
- Solid understanding of mathematics and ML fundamentals: linear algebra, statistics, optimization, and numerical methods.
- Experience with deep learning techniques for computer vision (e.g., CNNs, transformers), natural language understanding (e.g., BERT, GPT), or multimodal AI.
- Consistent track record of delivering ML solutions to production environments, optimizing performance, and ensuring reliability.
- Knowledge of large-scale distributed systems and frameworks (e.g., Kubernetes, Spark, Hadoop).
- Strong problem-solving skills and the ability to innovate new solutions for complex challenges.
- Excellent communication and collaboration skills to work efficiently in a fast-paced, multi-functional environment.

Nice-to-Haves:
- Experience with generative AI (e.g., Stable Diffusion, DALL·E, MidJourney) and its application in content creation or discovery.
- Knowledge of computational geometry, 3D modeling, or animation pipelines.
- Familiarity with real-time recommendation systems or search indexing.
- Publications in peer-reviewed journals or conferences in relevant fields.

Why Join Us?
Be part of a dynamic, innovative team shaping the future of AI-powered creative tools. Work on impactful, large-scale projects that touch millions of users daily. Enjoy the benefits and resources of a leading global company with the agility of a startup environment.

Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
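The role above spans ranking and contextual recommendation over content embeddings. As a toy sketch (pure Python; the three-dimensional "embeddings" and asset names are made up, whereas real systems use learned high-dimensional vectors and approximate nearest-neighbour indexes), similarity-based ranking reduces to:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank(query_vec, catalog):
    """Sort (asset, embedding) pairs by similarity to the query, descending."""
    return sorted(catalog, key=lambda item: cosine(query_vec, item[1]),
                  reverse=True)

catalog = [("sunset.jpg", [0.9, 0.1, 0.0]),
           ("report.pdf", [0.0, 0.2, 0.9]),
           ("beach.png", [0.8, 0.3, 0.1])]
best = rank([1.0, 0.0, 0.0], catalog)[0][0]
print(best)  # sunset.jpg
```

At production scale the brute-force sort is replaced by an ANN index, but the scoring function is frequently exactly this cosine similarity.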

Posted 6 days ago


5.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

The Challenge
Search, Discovery, and Content AI (SDC) is a cornerstone of Adobe's ecosystem, enabling creative professionals and everyday users to access, discover, and demonstrate a wide array of digital assets and creative content, including images, videos, documents, vector graphics and more. With increasing demand for intuitive search, contextual discovery, and seamless content interactions across Adobe products like Express, Lightroom, and Adobe Stock, SDC is evolving into a generative AI powerhouse. This team develops innovative solutions for intent understanding, personalized recommendations, and action orchestration to transform how users interact with content. Working with extensive datasets and pioneering technologies, you will help redefine the discovery experience and drive user success.

The Opportunity
We're looking for top-notch search engineering leaders with expertise in information retrieval, search indexing, Elasticsearch, Lucene, algorithms, relevance and ranking, data mining, machine learning, data analysis and metrics, query processing, multilingual search, and search UX. This is an opportunity to make a huge impact in a fast-paced, startup-like environment at a great company. Join us!
Responsibilities:
- Work on big data, data ingestion, search indexing, Hadoop, distributed systems, deep learning, recommendations, and performance by developing a search platform at Adobe that powers product lines such as Express, Creative Cloud, Acrobat, Marketing Cloud, and Stock.
- Apply machine learning to improve ranking and recommendations as part of the search workflow.
- Build a platform to index billions of images, documents and other assets in real time.
- Maintain and optimize the search engine; identify new ideas to evolve it; develop new features and benchmark possible solutions, in terms of search relevance and recommendations but also user experience, performance and feasibility.
- Build these products using technologies such as Elasticsearch, REST web services, SQS/Kafka, machine learning, and more.

What You Need to Succeed:
- B.Tech or M.Tech in Computer Science.
- Minimum 5-10 years of relevant industry experience.
- Experience engineering SaaS-based software.
- Hands-on experience with Java and Python.
- Hands-on experience in big data processing, Hadoop and Spark.
- Experience with web services and REST.
- Experience with RDBMS and NoSQL databases.
- Experience with AWS resources.
- Experience with Elasticsearch/Solr.
- Experience with search engine technology and inverted indexes.
- Hands-on experience building indexing pipelines.
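The requirements above call out inverted indexes and indexing pipelines. A minimal pure-Python illustration of the data structure (the document ids and texts are invented; production engines such as Elasticsearch/Lucene add tokenization, relevance scoring, and index compression on top):

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of doc ids containing it
    (a minimal inverted index)."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """AND semantics: return ids of docs containing every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = index.get(terms[0], set()).copy()
    for t in terms[1:]:
        result &= index.get(t, set())
    return result

docs = {1: "red sunset over beach",
        2: "beach vacation photo",
        3: "quarterly sales report"}
idx = build_index(docs)
print(sorted(search(idx, "beach")))        # [1, 2]
print(sorted(search(idx, "beach photo")))  # [2]
```

Ranking (e.g., BM25) and real-time updates are layered over this same term-to-postings mapping in full search engines.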

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! Adobe is seeking dedicated Product Analytics Experts to join our growing team in Noida. In this role, you will play a key part in driving the success of Adobe's Document Cloud products by using your expertise to understand user behavior, identify growth opportunities, and help drive data-driven decisions. Responsibilities: Analyze large datasets to identify trends, patterns, and key performance indicators. Develop and maintain SQL queries to extract, transform, and load data from various sources, including Hadoop and cloud-based platforms like Databricks. 
Develop compelling data visualizations using Power BI and Tableau to communicate insights clearly to PMs/Engineering and leadership. Conduct A/B testing and campaign analysis, using statistical methods to measure and evaluate the impact of product changes. Partner with cross-functional teams (product managers, engineers, marketers) to translate data into actionable insights and drive strategic decision-making. Independently own and manage projects from inception to completion, ensuring timely delivery and high-quality results. Effectively communicate analytical findings to stakeholders at all levels, both verbally and in writing. Qualifications: 8-12 years of relevant experience in solving deep analytical challenges within a product or data-driven environment. Strong proficiency in advanced SQL, with experience working with large-scale datasets. Expertise in data visualization tools such as Power BI and Tableau. Hands-on experience in A/B testing, campaign analysis, and statistical methodologies. Working knowledge of scripting languages like Python or R, with a foundational understanding of machine learning concepts. Experience with Adobe Analytics is a significant plus. Good communication, presentation, and interpersonal skills. A collaborative mindset with the ability to work effectively within cross-functional teams. Strong analytical and problem-solving skills with a passion for data-driven decision making. 
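The A/B testing work described above usually reduces to comparing conversion rates between a control and a variant. As a hedged, stdlib-only illustration (a generic two-proportion z-test, not tied to any Adobe tooling), the statistical check might look like:

```python
from math import sqrt, erf

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (pooled SE)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 2.0% vs 2.5% conversion on 10k users per arm
z, p = two_proportion_ztest(conv_a=200, n_a=10_000, conv_b=250, n_b=10_000)
print(f"z={z:.2f}, p={p:.4f}")
```

With these made-up numbers the difference is significant at the usual 5% level; in practice one would also pre-register the sample size and guard against peeking.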

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

The Solution Architect is responsible for driving the design and development of highly scalable, stable, and secure modern applications. Responsibilities: Design and develop modern applications with a modular, loosely coupled design, deployable on a cloud-native architecture with the desired SRE practices to seamlessly manage applications in production using observability, monitoring, automation, etc. Develop a deep knowledge of the business and collaborate with stakeholders to collate business and functional requirements and translate them into technical specifications per the defined architectures. Lead the preparation of detailed design specifications to form the basis for development, modification, and enhancement of applications. Define integration patterns. Map interdependencies of business requirements to solution building blocks using architectural best practices, and establish standards and robust processes. Comprehend and map interdependencies between application dependencies and infrastructure components. Drive design consistency across the organization and reduce repetition and duplication of work. Ensure adoption and implementation of defined solutions. Recommend designs that consider current applications and architecture and operating models, as well as the end target-state architecture. Develop and document solution specifications that serve as the reference for implementation. Collaborate with different engineering teams to develop and agree on system integration standards. Requirements: Minimum 8-12 years' hands-on experience using modelling tools for process and end-to-end solution design. In-depth industry and academic knowledge with proven experience in process analysis and design. Good proficiency in at least one major programming language: Java, C#, or Python. Good knowledge of web and mobile development standards and backend/database technologies such as Spring Boot, Node.js, MariaDB/MySQL, PostgreSQL, and the Hadoop ecosystem. 
Good understanding of DevOps, SRE, and Agile methodologies and tools such as Maven, jcube, Nexus, and Cucumber. Proficient in tools such as Git, Bitbucket, Jenkins, Artifactory, and Nexus. Strong understanding of distributed systems. Knowledge and experience of how data and applications integrate with business processes, policies, and regulations. Knowledge of risk and governance concepts. Highly organized and structured thinking; analytical ability to understand and synthesize unstructured information. Knowledge of business process modelling, information systems analysis and design, enterprise technology, and data modelling. Strong communication skills and the ability to engage with geographically distributed teams.

Posted 6 days ago

Apply

3.0 - 10.0 years

0 - 0 Lacs

pune, maharashtra

On-site

The ideal candidate for this position should have experience working with Hadoop and possess a minimum of 3-5 years in the field, with a salary rate of 90,000. Candidates with 5-7 years of experience can expect a salary rate of 1,30,000, while those with 7-10 years of experience can anticipate a rate of 1,50,000. The responsibilities for this role include tracking information such as serial number, date, agency name, skill set, candidate details (name, email, phone number), current company, experience level, notice period, and work location details for the project. About the company: Purview is a renowned Digital Cloud & Data Engineering company headquartered in Edinburgh, United Kingdom, with a global presence in 14 countries including India, Poland, Germany, Finland, Netherlands, Ireland, USA, UAE, Oman, Singapore, Hong Kong, Malaysia, and Australia. With a strong foothold in the UK, Europe, and APAC regions, Purview offers services to captive clients and top IT organizations in fully managed and co-managed capacity models. Company Information: Purview's office locations include: - 3rd Floor, Sonthalia Mind Space, Near Westin Hotel, Gafoor Nagar, Hitechcity, Hyderabad. Contact: +91 40 48549120 / +91 8790177967 - Gyleview House, 3 Redheughs Rigg, South Gyle, Edinburgh, EH12 9DQ, UK. Phone: +44 7590230910 For career inquiries, please reach out to: careers@purviewservices.com This role offers an exciting opportunity to work with a dynamic and innovative company with a global reach. If you meet the qualifications and are excited about the prospect of joining Purview, please log in to apply for the position.

Posted 1 week ago

Apply

500.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Role - Frontend Software Engineer I (React JS) Experience - 2+ yrs in React JS is mandatory Location - Mumbai About Us: We are surrounded by the world's leading consumer companies led by technology - Amazon for retail, Airbnb for hospitality, Uber for mobility, Netflix and Spotify for entertainment, etc. Food & Beverage is the only consumer sector where large players are still traditional restaurant companies. At Rebel Foods, we are challenging this status quo as we are building the world's most valuable restaurant company on the internet, superfast. The opportunity for us is immense due to the exponential growth in the food delivery business worldwide, which has helped us build 'The World's Largest Internet Restaurant Company' in the last few years. Rebel Foods' current presence in 4 countries (India, Indonesia, UAE, UK) with 45+ brands and 3500+ internet restaurants has been built on a simple system - The Rebel Operating Model. While for us it is still Day 1, we know we are in the middle of a revolution towards creating never-seen-before customer-first experiences. We bring you a once-in-a-lifetime opportunity to disrupt the 500-year-old industry with technology at its core. We urge you to refer to the below to understand how we are changing the restaurant industry before applying at Rebel Foods. https://jaydeep-barman.medium.com/why-is-rebel-foods-hiring-super-talented-engineers-b88586223ebe https://jaydeep-barman.medium.com/how-to-build-1000-restaurants-in-24-months-the-rebel-method-cb5b0cea4dc8 https://medium.com/faasos-story/winning-the-last-frontier-for-consumer-internet-5f2a659c43db https://medium.com/faasos-story/a-unique-take-on-food-tech-dcef8c51ba41 Frontend Software Engineer I @ REBEL FOODS Technology is the backbone and the biggest differentiator of any consumer-centric internet business. Most of the high-growth consumer-based internet companies (e.g., Amazon - Retail, Netflix - Entertainment/Media, Uber - Mobility, etc.) 
have constantly been disrupting their respective industries by powering their end-to-end business processes and products, solving business and customer problems with the help of new-age and scalable technologies. The restaurant industry still remains the major consumer-centric industry where online penetration and automation are minuscule. Our goal is simple – to change this incumbent mode of business in the food space. Today, with 3500+ virtual / delivery-only internet restaurants in 40 cities across India, Indonesia, UAE, and UK with 15+ brands (Faasos, Behrouz Biryani, Oven Story Pizza, etc.), Rebel Foods is the world's largest and fastest-growing internet restaurant company. Online Food Services (FoodTech) can be broken into Food Discovery (commerce), Preparation (manufacturing/SCM), and Delivery (logistics) platforms, and each of these verticals/platforms has many use cases which need to be solved at scale. Many companies (Swiggy, Zomato, DoorDash, Delivery Hero, Gojek, etc.) are solving the use cases to some extent in Food-Discovery and Food-Delivery Tech. Food-Preparation Tech still relies on old-world solutions. While we collaborate with many of these companies across the world, we solve the customer problems in the food space across all these verticals in an integrated fashion. As a software engineer, you will have interesting opportunities in building/architecting/re-architecting different backend and frontend systems. You will get a chance to work on different open-source, cloud, mobile, etc. tech stacks depending on your strengths. Software Engineering @Rebel: Software Engineering @ Rebel comprises most of the components of a customer-centric internet commerce company and problem statements of scalable and distributed systems. We believe in applying engineering excellence and operational excellence in all the areas of compute, storage, and network to build & operate efficient systems. 
While we work on open-source, cloud-managed, and enterprise tech stacks in Frontend (web/mobile), Backend (API layers), Caching, Async Processing/Queuing, Databases (SQL, NoSQL), ERP, CRM, Analytics, Big Data (Hadoop, Spark, MongoDB, etc.), Data Science (ML), etc., we also work on and evaluate many emerging tech stacks like NLP, AI/IoT, Robotics + Automation, Bots, Voice, Vision Computing, Blockchain, etc. to solve many use cases around different verticals of FoodTech. The teams have built more than 30 different systems in-house to tackle the massive complexity of a multi-brand operation while keeping business metrics supremely efficient and optimized. Future of Software Engineering @Rebel: The Rebel Engineering function is working on Software + Automation to solve the toughest problems in an integrated fashion for our customers and to make their food experiences unique, memorable, and delightful. We believe in continuous adoption of emerging technologies to solve customer problems in a fast and innovative fashion. Technology and data are the backbone for us to disrupt this industry and build the most loved experiences for customers. The Role: We are on the lookout for someone who is passionate about technology to solve known/unknown business and customer use cases. In this role, you will be responsible for writing efficient code and unit tests, reviewing code, evaluating technologies, doing POCs, etc. You must be highly proficient in frontend programming and technologies - React JS. We expect you to be excellent at writing efficient programs, with strong problem-solving skills. You must also possess good knowledge of data structures and algorithms and computer science fundamentals. Exposure to common technologies like web technologies, caching, queuing, databases (SQL), Big Data, storage systems, monitoring tools, cloud technologies, etc. is also helpful. You will work closely with the Product and Engineering teams and will report to an Engineering Manager. 
The Rebel Culture: We believe in empowering and growing people to perform the best at their job functions. We follow Outcome-oriented, fail-fast iterative & collaborative culture to move fast in building tech solutions. Rebel is not a usual workplace. The following slides will give you a sense of our culture, how Rebel conducts itself and who will be the best fit for our company. We suggest you go through it before making up your mind. https://drive.google.com/file/d/1f8BsKluXEu_Ey04iFoFtG1ktoBs1-YRD/view

Posted 1 week ago

Apply

2.0 - 9.0 years

0 Lacs

karnataka

On-site

We are seeking Data Architects (Sr. and Pr. Data Architects) to join our team. In this role, you will be involved in a combination of hands-on contribution, customer engagement, and technical team management. As a Data Architect, your responsibilities will include designing, architecting, deploying, and maintaining solutions on the MS Azure platform using various Cloud & Big Data technologies. You will be managing the full life-cycle of Data Lake / Big Data solutions, starting from requirement gathering and analysis to platform selection, architecture design, and deployment. It will be your responsibility to implement scalable solutions on the Cloud and collaborate with a team of business domain experts, data scientists, and application developers to develop Big Data solutions. Moreover, you will be expected to explore and learn new technologies for creative problem solving and mentor a team of Data Engineers. The ideal candidate should possess strong hands-on experience in implementing a Data Lake with technologies such as Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub & Streaming Analytics, Cosmos DB, and Purview. Additionally, experience with big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase, MongoDB, Neo4J, Elastic Search, Impala, Sqoop, etc., is required. Proficiency in programming and debugging skills in Python and Scala/Java is essential, with experience in building REST services considered beneficial. Candidates should also have experience in supporting BI and Data Science teams in consuming data in a secure and governed manner, along with a good understanding of using CI/CD with Git, Jenkins / Azure DevOps. Experience in setting up cloud-computing infrastructure solutions, hands-on experience/exposure to NoSQL databases, and data modelling in Hive are all highly valued. 
Applicants should have a minimum of 9 years of technical experience, with at least 5 years on MS Azure and 2 years on Hadoop (CDH/HDP).

Posted 1 week ago

Apply

0.0 - 6.0 years

15 - 18 Lacs

Indore, Madhya Pradesh

On-site

Location: Indore Experience: 6+ Years Work Type: Hybrid Notice Period: 0-30 days joiners We are hiring for a Digital Transformation Consulting firm that specializes in the advisory and implementation of AI, Automation, and Analytics strategies for healthcare providers. The company is headquartered in NJ, USA and its India office is in Indore, MP. Job Description: We are seeking a highly skilled Tech Lead with expertise in database management, data warehousing, and ETL pipelines to drive data initiatives across the company. The ideal candidate will lead a team of developers, architects, and data engineers to design, develop, and optimize data solutions. This role requires hands-on experience in database technologies, data modeling, ETL processes, and cloud-based data platforms. Key Responsibilities: Lead the design, development, and maintenance of scalable database, data warehouse, and ETL solutions. Define best practices for data architecture, modeling, and governance. Oversee data integration, transformation, and migration strategies. Ensure high availability, performance tuning, and optimization of databases and ETL pipelines. Implement data security, compliance, and backup strategies. Required Skills & Qualifications: 6+ years of experience in database and data engineering roles. Strong expertise in SQL, NoSQL, and relational database management systems (RDBMS). Hands-on experience with data warehousing technologies (e.g., Snowflake, Redshift, BigQuery). Deep understanding of ETL tools and frameworks (e.g., Apache Airflow, Talend, Informatica). Experience with cloud data platforms (AWS, Azure, GCP). Proficiency in programming/scripting languages (Python, SQL, Shell scripting). Strong problem-solving, leadership, and communication skills. Preferred Skills (Good to Have): Experience with big data technologies (Hadoop, Spark, Kafka). Knowledge of real-time data processing. 
Exposure to AI/ML technologies and working with ML algorithms. Job Types: Full-time, Permanent Pay: ₹1,500,000.00 - ₹1,800,000.00 per year Schedule: Day shift Application Question(s): We must fill this position urgently. Can you start immediately? Have you held a lead role in the past? Experience: Extract, Transform, Load (ETL): 6 years (Required) Python: 5 years (Required) Big data technologies (Hadoop, Spark, Kafka): 6 years (Required) Snowflake: 6 years (Required) Data warehouse: 6 years (Required) Location: Indore, Madhya Pradesh (Required) Work Location: In person
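Whatever the orchestration tool (Airflow, Talend, Informatica), the ETL pipelines this role owns follow the same extract / transform / load pattern. A toy stdlib-only sketch of the three stages (the CSV input, `sales` table, and cleaning rules are all hypothetical, not the firm's actual stack):

```python
import csv
import io
import sqlite3

def extract(raw_csv: str):
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: normalise names, cast amounts, drop malformed records."""
    out = []
    for r in rows:
        try:
            out.append({"name": r["name"].strip().title(),
                        "amount": float(r["amount"])})
        except (KeyError, ValueError):
            continue  # skip rows that fail validation
    return out

def load(rows, conn):
    """Load: write the cleaned rows into a warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (:name, :amount)", rows)
    conn.commit()

raw = "name,amount\n alice ,10.5\nbob,oops\nCAROL,3\n"
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
print(conn.execute("SELECT name, amount FROM sales").fetchall())
# -> [('Alice', 10.5), ('Carol', 3.0)]
```

In a real pipeline each stage would be an idempotent, schedulable task with logging and retries; the separation of concerns shown here is what makes that decomposition possible.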

Posted 1 week ago

Apply

1.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

The ideal candidate's favorite words are learning, data, scale, and agility. You will leverage your strong collaboration skills and ability to extract valuable insights from highly complex data sets to ask the right questions and find the right answers. Responsibilities Analyze raw data: assessing quality, cleansing, structuring for downstream processing Design accurate and scalable prediction algorithms Collaborate with engineering team to bring analytical prototypes to production Generate actionable insights for business improvements Qualifications Bachelor's degree or equivalent experience in a quantitative field (Statistics, Mathematics, Computer Science, Engineering, etc.) At least 1-2 years of experience in quantitative analytics or data modeling Deep understanding of predictive modeling, machine-learning, clustering and classification techniques, and algorithms Fluency in a programming language (Python, C, C++, Java, SQL) Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau)
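The "clustering and classification techniques" this posting asks about can be illustrated with one of the simplest classifiers, nearest centroid: average each class's training points, then assign a new point to the closest average. This is a generic teaching sketch in pure Python (made-up data, not code from the role):

```python
from math import dist  # Euclidean distance, Python 3.8+

def fit_centroids(X, y):
    """Compute the per-class mean of the feature vectors."""
    centroids = {}
    for label in set(y):
        pts = [x for x, lbl in zip(X, y) if lbl == label]
        centroids[label] = tuple(sum(c) / len(pts) for c in zip(*pts))
    return centroids

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda lbl: dist(centroids[lbl], x))

# Hypothetical 2-D training data with two classes
X = [(1.0, 1.0), (1.2, 0.8), (8.0, 8.0), (7.8, 8.4)]
y = ["low", "low", "high", "high"]
model = fit_centroids(X, y)
print(predict(model, (7.5, 7.9)))  # -> high
```

Replacing "mean of my class" with "mean of my cluster, recomputed iteratively" turns this same idea into k-means clustering, which is why the two techniques are usually taught together.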

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

hyderabad, telangana

On-site

As an Automation QA Engineer (Python & Scala) at our global IT leader client based in Hyderabad, you will be responsible for designing and developing automation testing frameworks using web and object-oriented technologies. With a focus on ensuring product quality, you will conduct functional and automation QA processes. Your expertise in Python or Java programming will be crucial in leveraging strong object-oriented programming skills to achieve project goals. Collaboration with team members to meet specific timelines is essential for success in this role. Ideally, you should possess 4 to 6 years of experience, with at least 2 years in automation or development being preferred. The offered salary for this position is 30 LPA, and the notice period required is immediate to 15 days. The option for relocation is available for interested candidates. Candidates with exposure to Big Data processing systems, Hadoop, or Scala will have an added advantage. Strong analytical and testing skills are essential for this role to excel in the fast-paced environment of our client's IT consulting, business process management, and digital transformation services.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

bhubaneswar

On-site

At Rhythm, our values serve as the cornerstone of our organization. We are deeply committed to customer success, fostering innovation, and nurturing our employees. These values shape our decisions, actions, and interactions, ensuring that we consistently create a positive impact on the world around us. Rhythm Innovations is currently looking for a skilled and enthusiastic Machine Learning (ML) Developer to conceptualize, create, and implement machine learning models that enhance our supply chain risk management and other cutting-edge solutions. As an ML Developer, you will collaborate closely with our AI Architect and diverse teams to construct intelligent systems that tackle intricate business challenges and further our goal of providing unparalleled customer satisfaction. Key Responsibilities Model Development: Devise, execute, and train machine learning models utilizing cutting-edge algorithms and frameworks like TensorFlow, PyTorch, and scikit-learn. Data Preparation: Process, refine, and convert extensive datasets for the training and assessment of ML models. Feature Engineering: Identify and engineer pertinent features to enhance model performance and precision. Algorithm Optimization: Explore and implement advanced algorithms to cater to specific use cases such as classification, regression, clustering, and anomaly detection. Integration: Coordinate with software developers to integrate ML models into operational systems and guarantee smooth functionality. Performance Evaluation: Assess model performance using suitable metrics and consistently refine for accuracy, efficacy, and scalability. MLOps: Aid in establishing and overseeing CI/CD pipelines for model deployment and monitoring in production environments. Research and Development: Stay abreast of the latest breakthroughs in GenAI and AI/ML technologies and propose inventive solutions. 
Collaboration: Engage closely with data engineers, product teams, and stakeholders to grasp requirements and deliver customized ML solutions. Requirements Educational Background: Bachelor's in Engineering (Computer Science, Data Science, Artificial Intelligence, or a related field). Experience: 3 to 6 years of practical experience in developing and deploying machine learning models. Technical Skills Proficiency in Python and ML libraries/frameworks (e.g., scikit-learn, TensorFlow, PyTorch). Experience with data manipulation tools like Pandas, NumPy, and visualization libraries such as Matplotlib or Seaborn. Familiarity with big data frameworks (Hadoop, Spark) is advantageous. Knowledge of SQL/NoSQL databases and data pipeline tools (e.g., Apache Airflow). Hands-on experience with cloud platforms (AWS, Azure, Google Cloud) and their GenAI and AI/ML services. Thorough understanding of supervised and unsupervised learning, deep learning, and reinforcement learning. Exposure to MLOps practices and model deployment pipelines. Soft Skills Strong problem-solving and analytical abilities. Effective communication and teamwork skills. Capability to thrive in a dynamic, collaborative environment.
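The "suitable metrics" mentioned under Performance Evaluation usually start with precision, recall, and F1 for classification models. A minimal pure-Python sketch of computing them from confusion-matrix counts (generic illustration, not Rhythm's codebase; scikit-learn's `precision_recall_fscore_support` does the same in practice):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for one positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical labels: 3 true positives, 1 false positive, 1 false negative
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(precision_recall_f1(y_true, y_pred))  # -> (0.75, 0.75, 0.75)
```

For the anomaly-detection use cases the posting lists, these metrics matter more than raw accuracy, since the positive class is typically rare.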

Posted 1 week ago

Apply

8.0 - 14.0 years

0 Lacs

navi mumbai, maharashtra

On-site

As a Data Modeller at ReBIT, you will be responsible for technology delivery by collaborating with business stakeholders, RBI departments, and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models, data migration, and generate business reports. You will play a crucial role in identifying the architecture, infrastructure, interfaces to data sources, tools supporting automated data loads, security concerns, analytic models, and data visualization. The ideal candidate should possess 8-14 years of experience in the IT industry with hands-on experience in relational, dimensional, and/or analytic data using RDBMS, dimensional, NoSQL data platform technologies, and ETL and data ingestion protocols. Experience in data technologies such as SQL, Pl/SQL, Oracle Exadata, MongoDB, Cassandra, and Hadoop is required. Additionally, expertise in designing enterprise-grade application data models/structures, particularly in the BFSI domain, is essential. You should have a good understanding of metadata management, data modeling, and related tools like Oracle SQL Developer Data Modeler, Erwin, or ER Studio. Your role will involve working on modeling, design, configuration, installation, and performance tuning to ensure the successful delivery of applications in the BFSI domain. Furthermore, you will be responsible for building best-in-class performance-optimized relational/non-relational database structures/models and creating ER diagrams, data flow diagrams, and dimensional diagrams for relational systems/data warehouse. In this role, you will need to work proactively and independently to address project requirements and effectively communicate issues/challenges to reduce project delivery risks. You will be a key player in driving the data modeling process, adhering to design standards, tools, best practices, and related development for enterprise data models. 
If you are a data modeling professional with a passion for delivering innovative solutions in a collaborative environment, this role at ReBIT in Navi Mumbai offers an exciting opportunity to contribute to the BFSI domain while honing your skills in data modeling and technology delivery.

Posted 1 week ago

Apply

0.0 - 5.0 years

0 Lacs

Pune, Maharashtra

Remote

R022242 Pune, Maharashtra, India Engineering Regular Location Details: Pune, India This is a hybrid position. You’ll divide your time between working remotely from your home and an office, so you should live within commuting distance. Hybrid teams may work in-office as much as a few times a week or as little as once a month or quarter, as decided by leadership. The hiring manager can share more about what hybrid work might look like for this team Join our Team Are you excited about building world-class software solutions that empower millions of customers globally? At GoDaddy, our engineers are at the forefront of developing innovative platforms that drive our core domain businesses. We are seeking a skilled Senior Machine Learning Scientist to join our Domain Search team, where you will design, build, and maintain the foundational systems and services powering GoDaddy’s search, ML, and GenAI platforms. In this role, you will develop and apply machine learning and LLM-based methods to improve our customers’ search experience and play a major part in improving the search page across all markets we serve. Whether you’re passionate about crafting highly scalable systems or developing seamless customer experiences, your work will be critical to ensuring performance, scalability, and reliability for our customers worldwide. Join us and help craft the future of software at GoDaddy! What you'll get to do... Work with the latest deep learning and search technologies to develop and optimize advanced machine learning models to improve our customers’ experience Be self-driven, understand the data we have, and provide data-driven insights to all of our challenges Mine datasets to develop features and models to improve search relevance and ranking algorithms Design and analyze experiments to test new product ideas Understand patterns and insights about what our users search for and purchase to help personalize our recommendations Your experience should include... 
5 years of industry experience in deep learning and software development Skilled in machine learning, statistics, and natural language processing (NLP) Proficient with deep learning frameworks such as PyTorch and handling large datasets Experienced in programming languages like Python, Java, or similar Familiar with large-scale data analytics using Spark You might also have... Ph.D. in a related field preferred Experience with Amazon AWS, containerized solutions, and both SQL and NoSQL databases Strong understanding of software security standard processes Experience with Hadoop technologies such as Spark, Hive, and other big data tools; data analytics and machine learning experience are a plus Experience with Elastic Search and search technologies is a plus, with a passion for developing innovative solutions for real-world business problems We've got your back... We offer a range of total rewards that may include paid time off, retirement savings (e.g., 401k, pension schemes), bonus/incentive eligibility, equity grants, participation in our employee stock purchase plan, competitive health benefits, and other family-friendly benefits including parental leave. GoDaddy’s benefits vary based on individual role and location and can be reviewed in more detail during the interview process We also embrace our diverse culture and offer a range of Employee Resource Groups (Culture). Have a side hustle? No problem. We love entrepreneurs! Most importantly, come as you are and make your own way About us... GoDaddy is empowering everyday entrepreneurs around the world by providing the help and tools to succeed online, making opportunity more inclusive for all. GoDaddy is the place people come to name their idea, build a professional website, attract customers, sell their products and services, and manage their work. Our mission is to give our customers the tools, insights, and people to transform their ideas and personal initiative into success. 
To learn more about the company, visit About Us.

At GoDaddy, we know diverse teams build better products—period. Our people and culture reflect and celebrate that sense of diversity and inclusion in ideas, experiences and perspectives. But we also know that’s not enough to build true equity and belonging in our communities. That’s why we prioritize integrating diversity, equity, inclusion and belonging principles into the core of how we work every day—focusing not only on our employee experience, but also our customer experience and operations. It’s the best way to serve our mission of empowering entrepreneurs everywhere, and making opportunity more inclusive for all. To read more about these commitments, as well as our representation and pay equity data, check out our Diversity and Pay Parity annual report, which can be found on our Diversity Careers page.

GoDaddy is proud to be an equal opportunity employer. GoDaddy will consider for employment qualified applicants with criminal histories in a manner consistent with local and federal requirements. Refer to our full EEO policy.

Our recruiting team is available to assist you in completing your application. If they could be helpful, please reach out to myrecruiter@godaddy.com. GoDaddy doesn’t accept unsolicited resumes from recruiters or employment agencies.
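
As a rough illustration of the relevance-and-ranking work this posting describes (not GoDaddy's actual ranking system), the classic lexical baseline is BM25 scoring. The sketch below is a minimal plain-Python version; the function name, toy corpus, and parameter defaults are all hypothetical:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized document against the query with Okapi BM25."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    # document frequency of each query term across the corpus
    df = {t: sum(1 for d in docs if t in d) for t in query_terms}
    scores = []
    for doc in docs:
        tf = Counter(doc)
        score = 0.0
        for t in query_terms:
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            num = tf[t] * (k1 + 1)
            den = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * num / den
        scores.append(score)
    return scores

# toy corpus: documents that do and don't match a "domain search" query
docs = [
    "buy cool domain names online".split(),
    "best pizza recipes".split(),
    "domain name search and registration".split(),
]
print(bm25_scores("domain search".split(), docs))
```

In practice a search team would feed scores like these, along with many learned features, into a ranking model rather than use BM25 alone.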

Posted 1 week ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra

Remote

R022243 Pune, Maharashtra, India Engineering Regular

Location Details: Pune, India. This is a hybrid position. You’ll divide your time between working remotely from your home and an office, so you should live within commuting distance. Hybrid teams may work in-office as much as a few times a week or as little as once a month or quarter, as decided by leadership. The hiring manager can share more about what hybrid work might look like for this team.

Join our Team

Are you excited about building world-class software solutions that power millions of customers globally? At GoDaddy, our engineers are at the forefront of crafting innovative platforms that drive our core domain businesses, and we’re looking for dedicated professionals to help us craft the future of software. Whether you’re passionate about developing highly scalable systems, seamless customer experiences, or advanced machine learning and LLM-based methods to improve the search experience, we have a place for you!

As part of our Domain Search, Registrars, and Investors teams, you’ll work on impactful products like our domain name search engine, registration and management services, high-scale DNS, investor experience, and personalization through ML models. You’ll play a key role in improving the search page for customers worldwide, owning the design, code, and data quality of your products end-to-end. We value strong software engineers with experience in microservices, cloud computing, distributed systems, data processing, and customer focus—and we’re flexible regarding your technology background. Join a small, high-impact team of dedicated engineers as we build and iterate upon the world’s largest domain name registrar services and secondary marketplace.

What you'll get to do...
Develop and maintain scalable, cloud-ready applications and APIs, contributing across the full technology stack, including the persistence and service layers
Leverage data analytics and ETL processes to transform, enrich, and improve the product and customer experience in both batch and streaming scenarios
Ensure high code quality through unit/integration testing, code reviews, and consistency with standard methodologies
Lead technical projects through the architecture, design, and implementation phases, solving problems end-to-end
Collaborate effectively with distributed teams

Your experience should include...

2+ years of industry experience with a strong background in deep learning and software development
Skilled in machine learning, statistics, and natural language processing (NLP)
Hands-on experience with deep learning frameworks such as PyTorch and with large datasets
Proficient in programming languages such as Python or Java
Familiar with large-scale data analytics using Spark

You might also have...

Experience with AWS and containerized solutions
Proficiency in both SQL and NoSQL databases
A strong understanding of software security standard processes
Experience with Hadoop technologies (e.g., Spark, Hive) and big data analytics; ML and search technologies (e.g., Elasticsearch) are a plus

We've got your back...

We offer a range of total rewards that may include paid time off, retirement savings (e.g., 401k, pension schemes), bonus/incentive eligibility, equity grants, participation in our employee stock purchase plan, competitive health benefits, and other family-friendly benefits including parental leave. GoDaddy’s benefits vary based on individual role and location and can be reviewed in more detail during the interview process.

We also embrace our diverse culture and offer a range of Employee Resource Groups (Culture). Have a side hustle? No problem. We love entrepreneurs! Most importantly, come as you are and make your own way.

About us...
GoDaddy is empowering everyday entrepreneurs around the world by providing the help and tools to succeed online, making opportunity more inclusive for all. GoDaddy is the place people come to name their idea, build a professional website, attract customers, sell their products and services, and manage their work. Our mission is to give our customers the tools, insights, and people to transform their ideas and personal initiative into success.

To learn more about the company, visit About Us.

At GoDaddy, we know diverse teams build better products—period. Our people and culture reflect and celebrate that sense of diversity and inclusion in ideas, experiences and perspectives. But we also know that’s not enough to build true equity and belonging in our communities. That’s why we prioritize integrating diversity, equity, inclusion and belonging principles into the core of how we work every day—focusing not only on our employee experience, but also our customer experience and operations. It’s the best way to serve our mission of empowering entrepreneurs everywhere, and making opportunity more inclusive for all. To read more about these commitments, as well as our representation and pay equity data, check out our Diversity and Pay Parity annual report, which can be found on our Diversity Careers page.

GoDaddy is proud to be an equal opportunity employer. GoDaddy will consider for employment qualified applicants with criminal histories in a manner consistent with local and federal requirements. Refer to our full EEO policy.

Our recruiting team is available to assist you in completing your application. If they could be helpful, please reach out to myrecruiter@godaddy.com. GoDaddy doesn’t accept unsolicited resumes from recruiters or employment agencies.
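
The batch-versus-streaming distinction this posting mentions can be illustrated with a toy incremental aggregate: a batch job recomputes a statistic over the whole dataset, while a streaming job updates it per event. This is a hedged, hypothetical sketch, not code from the role:

```python
class StreamingMean:
    """Maintains a running mean incrementally, so a streaming job can
    update the aggregate per event instead of rescanning a full batch."""

    def __init__(self):
        self.count = 0
        self.total = 0.0

    def update(self, x):
        """Fold one new observation in and return the mean so far."""
        self.count += 1
        self.total += x
        return self.total / self.count

# batch computation: one pass over all the data at once
events = [10.0, 20.0, 30.0]
batch_mean = sum(events) / len(events)

# streaming computation: the same answer, one event at a time
m = StreamingMean()
for x in events:
    stream_mean = m.update(x)
```

Both paths converge on the same value; the streaming form simply trades a full rescan for a small amount of carried state, which is the core idea behind stateful stream-processing operators.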

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

You are a highly skilled and experienced Senior Engineer in Data Science who will be responsible for designing and implementing next-generation data science solutions. Your role will involve shaping the data strategy and driving innovation through advanced analytics and machine learning.

In this position, your responsibilities will include providing technical leadership and designing end-to-end data science solutions, encompassing data acquisition, ingestion, processing, storage, modeling, and deployment. You will also be tasked with developing and maintaining scalable data pipelines and architectures using cloud-based platforms and big data technologies to handle large volumes of data efficiently. Collaboration with stakeholders to define business requirements and translate them into technical specifications is essential.

As a Senior Engineer in Data Science, you will select and implement appropriate machine learning algorithms and techniques, staying updated on the latest advancements in AI/ML to solve complex business problems. Building and deploying machine learning models, monitoring and evaluating model performance, and providing technical leadership and mentorship to junior data scientists are also key aspects of this role. Furthermore, you will contribute to the development of data science best practices and standards.

To qualify for this position, you should hold a B.Tech/M.Tech/M.Sc (Mathematics/Statistics)/PhD from India or abroad. You are expected to have 4+ years of experience in data science and machine learning, with around 7+ years of overall experience. A proven track record of technical leadership and implementing complex data science solutions is required, along with a strong understanding of data warehousing, data modeling, and ETL processes.
Essential qualifications also include expertise in machine learning algorithms and techniques, time series analysis, programming proficiency in Python, knowledge of general data science tools, domain knowledge in Industrial, Manufacturing, and/or Healthcare, proficiency in cloud-based platforms and big data technologies, and excellent communication and collaboration skills. Additionally, contributions to open-source projects or publications in relevant fields will be considered an added advantage.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

We are seeking experienced and talented engineers to join our team. Your main responsibilities will include designing, building, and maintaining the software that drives the global logistics industry.

WiseTech Global is a leading provider of software for the logistics sector, facilitating connectivity for major companies like DHL and FedEx within their supply chains. Our organization is product- and engineer-focused, with a strong commitment to enhancing the functionality and quality of our software through continuous innovation. Our primary Research and Development center in Bangalore plays a pivotal role in our growth strategies and product development roadmap.

As a Lead Software Engineer, you will serve as a mentor, a leader, and an expert in your field. You should be adept at communicating effectively with senior management while also being hands-on with the code to deliver effective solutions.

The technical environment you will work in includes technologies such as C#, Java, C++, Python, Scala, Spring, Spring Boot, Apache Spark, Hadoop, Hive, Delta Lake, Kafka, Debezium, GKE (Kubernetes Engine), Composer (Airflow), DataProc, DataStreams, DataFlow, MySQL RDBMS, MongoDB NoSQL (Atlas), UIPath, Helm, Flyway, Sterling, EDI, Redis, Elasticsearch, Grafana Dashboard, and Docker.

Before applying, please note that WiseTech Global may engage external service providers to assess applications. By submitting your application and personal information, you agree to WiseTech Global sharing this data with external service providers, who will handle it confidentially in compliance with privacy and data protection laws.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

Cloud Kinetics is seeking a candidate with expertise in Big Data, Hadoop, Hive SQLs, Spark, and other tools within the Big Data ecosystem. As a member of our team, you will be responsible for developing code, optimizing queries for performance, setting up environments, ensuring connectivity, and deploying code into production post-testing. Strong functional and technical knowledge is essential to fulfill project requirements, particularly in the context of Banking terminologies. Additionally, you may lead small to medium-sized projects and act as the primary contact for related tasks. Proficiency in DevOps and an Agile development framework is crucial for this role.

In addition to the core requirements, familiarity with cloud computing, particularly AWS or Azure Cloud Services, is advantageous. The ideal candidate will possess strong problem-solving skills, adaptability to ambiguity, and a quick grasp of new and complex concepts. Experience in collaborating with teams within complex organizational structures is preferred. Knowledge of BI tools like MSTR and Tableau, as well as a solid understanding of object-oriented programming and HDFS concepts, will be beneficial.

As a member of the team, your responsibilities will include working as a developer with Big Data, Hadoop, or data warehousing tools and cloud computing, working on Hadoop, Hive SQLs, Spark, and other tools within the Big Data ecosystem. Furthermore, you will create Scala/Spark jobs for data transformation and aggregation, develop unit tests for Spark transformations and helper methods, and design data processing pipelines to streamline operations.

If you are a proactive individual with a strong technical background and a passion for leveraging cutting-edge technologies to drive innovation, we encourage you to apply for this exciting opportunity at Cloud Kinetics.
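
One common way to keep the "unit tests for Spark transformations and helper methods" mentioned above practical is to isolate the transformation logic in plain functions that a Spark job can invoke but a test can exercise without a cluster. The sketch below is illustrative only, in Python with a hypothetical name and schema (the posting itself mentions Scala/Spark):

```python
from collections import defaultdict

def daily_totals(rows):
    """Aggregate (date, account, amount) transaction rows into per-date totals.

    Because this is a plain function over an iterable, a Spark job can apply
    it to partitions of an RDD/DataFrame, while unit tests call it directly
    on small in-memory fixtures with no Spark session at all.
    """
    totals = defaultdict(float)
    for date, _account, amount in rows:
        totals[date] += amount
    return dict(totals)

# example fixture, as a unit test would use
sample = [
    ("2024-01-01", "acct-1", 10.0),
    ("2024-01-01", "acct-2", 5.0),
    ("2024-01-02", "acct-1", 2.5),
]
result = daily_totals(sample)
```

Keeping the aggregation pure like this also makes the same logic reusable across batch and streaming pipelines, since the framework-specific wiring stays at the edges.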

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

noida, uttar pradesh

On-site

As an AI Data Scientist on our dynamic team, you will be responsible for leveraging data and advanced algorithms to drive AI initiatives, build machine learning models, and provide actionable insights that contribute to the company's strategic goals. Working closely with cross-functional teams, you will deliver data-driven solutions that enhance our products and services.

A Master's degree or Ph.D. in Data Science, Computer Science, Statistics, Mathematics, or a related field is required, along with 10+ years of experience in data science, analytics, and machine learning. Your technical expertise should include proficiency in programming languages such as Python, R, or SQL, and a strong understanding of machine learning algorithms, statistical modeling, and data mining techniques. Experience with data visualization tools like Tableau, Power BI, or Matplotlib, and knowledge of big data technologies such as Hadoop and Spark and of cloud platforms like AWS and Google Cloud, is essential. Familiarity with both relational and NoSQL database systems is also expected.

Strong soft skills are crucial for this role, including excellent problem-solving and analytical skills, effective communication and collaboration abilities, and the capability to explain complex technical concepts to non-technical stakeholders. You should also possess strong project management and organizational skills.

Your responsibilities will involve data analysis, including collecting, cleaning, and preprocessing data from various sources, performing exploratory data analysis, and developing statistical models and machine learning algorithms to solve business problems. Additionally, you will design, build, and validate predictive models, optimize model performance, and implement machine learning solutions in production environments. Creating data visualizations, developing dashboards and reports, and presenting data-driven recommendations to stakeholders are also key tasks.
Collaborating with cross-functional teams, providing technical guidance and mentorship, staying updated with industry trends, and driving innovation in data science practices are also essential aspects of the role.

GlobalLogic offers exciting projects, a collaborative environment, work-life balance, professional development opportunities, excellent benefits, and fun perks to create a rewarding work experience for its employees. Join us at GlobalLogic, a leader in digital engineering, and be part of a team that designs and builds innovative products, platforms, and digital experiences for the modern world.
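
As a small illustration of the cleaning-and-preprocessing step this posting describes (hypothetical function and data, not the team's actual pipeline), median imputation of missing values can be done with the Python standard library alone:

```python
from statistics import median

def impute_missing(values):
    """Replace None entries with the median of the observed values.

    Median imputation is a simple, outlier-robust baseline used during
    preprocessing before fitting statistical or ML models.
    """
    observed = [v for v in values if v is not None]
    med = median(observed)
    return [med if v is None else v for v in values]

# example: a numeric feature column with two missing readings
cleaned = impute_missing([23.0, None, 31.0, 27.0, None])
```

Production pipelines typically compute imputation statistics on training data only and reuse them at inference time, to avoid leaking information from unseen data.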

Posted 1 week ago

Apply