
14 Hadoop Ecosystem Jobs

Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

8.0 - 12.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

Qualifications:
- Bachelor's degree in Physics, Mathematics, Engineering, Metallurgy, or Computer Science, with an MSc in a relevant field (Physics, Mathematics, Engineering, Computer Science, Chemistry, or Metallurgy)
- At least 8 years of experience in Data Science and Analytics delivery; for the AI Expert responsibilities below, a minimum of 10 years of experience is expected
- Deep knowledge of machine learning, statistics, optimization, and related fields
- Proficiency in programming languages such as R and Python
- Experience with machine learning skills such as Natural Language Processing (NLP) and deep learning, and hands-on experience with deep learning frameworks such as TensorFlow, Keras, Theano, or PyTorch
- Familiarity with large datasets, including extracting data from cloud platforms and the Hadoop ecosystem
- Experience with data visualization tools such as MS Power BI or Tableau, and proficiency in SQL and RDBMS for data extraction and management
- Understanding of data warehouse fundamentals; experience productionizing machine learning models on cloud platforms such as Azure, GCP, or AWS and domain experience in the manufacturing industry are advantageous
- Demonstrated leadership in nurturing technical talent, a record of completing complex data science projects, and excellent written and verbal communication

Responsibilities:
- Serve as a technical expert, providing guidance in the development and implementation of AI solutions
- Collaborate with cross-functional teams to integrate AI technologies into products and services
- Participate in Agile methodologies, contribute to PI planning, and support the technical planning of products
- Analyze technical requirements, propose AI-based solutions, and collaborate with stakeholders to design AI models that meet business objectives
- Stay current with advancements in AI technologies, conduct code reviews, mentor team members, and drive the adoption of AI across the organization

Strong problem-solving skills, a proactive approach to problem resolution, and the ability to work under tight deadlines without compromising quality are essential. Overall, you will play a critical role in building and growing the Data Science Centre of Excellence, providing machine learning methodology leadership, and designing POCs using ML/DL/NLP solutions for enterprise problems. The ability to learn new technologies, work in a fast-paced environment, and partner with the business to unlock value through data projects will be key to success in this position.
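
For illustration only: a minimal Keras sketch of the kind of deep-learning work the posting names (TensorFlow/Keras, NLP-style classification). The vocabulary size, layer sizes, and random stand-in data are assumptions chosen for demonstration, not a prescribed architecture.

```python
# Minimal Keras text-classifier sketch (illustrative; hyperparameters and
# data are placeholders). Requires TensorFlow 2.x.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE, SEQ_LEN = 10_000, 100

model = keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 64),       # token ids -> dense vectors
    layers.GlobalAveragePooling1D(),        # average over the sequence
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary label, e.g. sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in data, just to show the shape of the training call.
X = np.random.randint(0, VOCAB_SIZE, size=(256, SEQ_LEN))
y = np.random.randint(0, 2, size=(256,))
model.fit(X, y, epochs=2, batch_size=32)
```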

Posted 1 day ago

Apply

5.0 - 10.0 years

0 Lacs

Karnataka

On-site

As an experienced information technology professional with 5 to 10 years of experience, you will create data models for corporate analytics in compliance with standards, ensuring usability and conformance across the enterprise. You will develop data strategies, ensure vocabulary consistency, and manage data transformations through intricate analytical relationships and access paths, including data mappings at the data-field level.

Collaborating with Product Management and business stakeholders, you will identify and evaluate the data sources needed to achieve project and business objectives. Working closely with Tech Leads and Product Architects, you will gain insight into end-to-end data implications, data integration, and the functioning of business systems, and you will work with DQ Leads to address data integrity improvements and quality resolutions at the source. Domain knowledge in supply chain, retail, or inventory management is required.

Critical skills include a strong understanding of various software platforms and development technologies; proficiency in SQL, RDBMS, data lakes, and warehouses; and knowledge of the Hadoop ecosystem, Azure, ADLS, Kafka, Apache Delta, and Databricks/Spark. Experience with data modeling tools such as ER/Studio or Erwin is advantageous. Effective collaboration with Product Managers, technology teams, and business partners, familiarity with Agile and DevOps techniques, and excellent written and verbal communication are essential.

Preferred qualifications include a bachelor's degree in business information technology, computer science, or a related discipline. This is a full-time position located in Bangalore/Bengaluru, Delhi, Kolkata, or Navi Mumbai. If you meet these requirements, please apply online. The digitalxnode evaluation team will review your resume and, if your profile is selected, will reach out for next steps. Your information will be retained in our database for future openings.

Posted 2 days ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

We are looking for a talented Python QA Automation Engineer with expertise in cloud technologies, specifically Google Cloud Platform (GCP). You will design, implement, and maintain automated testing frameworks to ensure the quality and reliability of software applications deployed on GCP. This role requires a strong background in Python programming, QA automation, and cloud-based environments. You will collaborate with internal teams to solve complex quality and development problems while gaining a deep understanding of networking and access technologies in the cloud, and you will lead or contribute to engineering efforts from planning through execution.

Requirements:
- 4 to 8 years of experience in test development and automation tools development
- Experience designing and building advanced automated testing frameworks, tools, and test suites
- Proficiency in Go programming and experience with Google Cloud Platform, Kubernetes, Docker, Helm, Ansible, and building internal tools
- Expertise in backend testing, creating test cases and test plans, and defining optimal test suites for various testing scenarios
- Experience with CI/CD pipelines, Python programming, Linux environments, PaaS and/or SaaS platforms, and the Hadoop ecosystem is advantageous
- A solid understanding of computer science fundamentals and data structures
- Excellent communication and collaboration skills

Benefits include a competitive salary and benefits package, talent development opportunities, exposure to cutting-edge technologies, and employee engagement initiatives. We are committed to diversity and inclusion, offering hybrid work options, flexible hours, and accessible facilities for employees with disabilities. If you are ready to accelerate your growth, impact the world with innovative technologies, and thrive in a diverse and inclusive environment, join us at Persistent.
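
For illustration only: a minimal sketch of the kind of automated check such a framework might contain, written with pytest and requests (the framework choice, endpoint URL, and response shape are assumptions; the posting does not name specific test tools).

```python
# Illustrative pytest sketch: a backend API health check of the kind a QA
# automation suite might include. BASE_URL and the response payload are
# hypothetical placeholders.
import pytest
import requests

BASE_URL = "https://example.internal/api"  # hypothetical service under test

@pytest.fixture(scope="session")
def http():
    # Share one HTTP session across tests to keep connection setup cheap.
    with requests.Session() as s:
        yield s

def test_health_endpoint_returns_ok(http):
    resp = http.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"
```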

Posted 3 days ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The Applications Development Senior Programmer Analyst is a vital role in which you will participate in establishing and implementing new or revised application systems and programs in collaboration with the Technology team. Your main goal is to contribute to applications systems analysis and programming activities.

Responsibilities:
- Conduct feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development, and implement new or revised application systems and programs to meet specific business needs
- Monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, and provide user and operational support on applications to business users
- Use in-depth specialty knowledge of applications development to analyze complex problems, evaluate business processes, system processes, and industry standards, and make evaluative judgments
- Recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality
- Consult with users/clients and other technology groups, recommend advanced programming solutions, and assist in the installation of customer exposure systems
- Ensure essential procedures are followed, help define operating standards and processes, and serve as an advisor or coach to new or lower-level analysts
- Operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a Subject Matter Expert (SME) to senior stakeholders and other team members

Qualifications: 8+ years of development experience with expertise in the Hadoop ecosystem, Java server-side development, Scala programming, Spark, data analysis using SQL, a financial background, Python, Linux, proficiency in reporting tools such as Tableau, stakeholder management, and a history of delivering against agreed objectives. You should be able to multitask, work under pressure, pick up new concepts quickly, demonstrate problem-solving skills, and bring an enthusiastic, proactive approach with a willingness to learn, along with excellent analytical and process-based skills. A Bachelor's degree or equivalent experience is preferred.

This job description provides a high-level review of the types of work performed; other job-related duties may be assigned as required.

Posted 4 days ago

Apply

3.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

You should have strong experience in PySpark, Python, Unix scripting, Spark SQL, and Hive, be proficient in writing SQL queries and creating views, and possess excellent oral and written communication skills. Prior experience in the insurance domain is beneficial. A good understanding of the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, Oozie, and YARN, is required, as is knowledge of AWS services such as Glue, S3, Lambda, Step Functions, and EC2. Experience migrating data from platforms such as Hive/S3 to Databricks is a plus. You should be able to prioritize, plan, organize, and manage multiple tasks efficiently while delivering high-quality work.

Candidates should have 6-8 years of technical experience in PySpark and AWS (Glue, EMR, Lambda, Step Functions, S3), with at least 3 years of experience in Big Data/ETL using Python, Spark, and Hive, and 3+ years of experience in AWS. Primary key skills: PySpark; AWS (Glue, EMR, Lambda, Step Functions, S3); and Big Data with Python, Spark, and Hive, including exposure to Big Data migration. Secondary key skills: Informatica BDM/PowerCenter, Databricks, and MongoDB.
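
For illustration only: a minimal PySpark sketch of the Hive-and-SQL work described above, reading a Hive table, deriving an aggregate with Spark SQL, and writing Parquet to S3. The database, table, column, and bucket names are hypothetical placeholders.

```python
# Illustrative PySpark job: Hive table -> Spark SQL aggregate -> S3 Parquet.
# All object names below are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("claims-summary-etl")
    .enableHiveSupport()  # read Hive tables through the metastore
    .getOrCreate()
)

spark.table("insurance_db.claims").createOrReplaceTempView("claims")

summary = spark.sql("""
    SELECT policy_id,
           COUNT(*)    AS claim_count,
           SUM(amount) AS total_amount
    FROM claims
    GROUP BY policy_id
""")

# Write the result where downstream Glue/Athena-style readers can pick it up.
summary.write.mode("overwrite").parquet("s3://example-bucket/claims_summary/")
```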

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have a minimum of 8 to 10 years of experience in Java, REST API, and Spring Boot, along with hands-on experience with AngularJS, ReactJS, or VueJS. A bachelor's degree or higher in computer science, data science, or a related field is required. The role involves data cleaning, visualization, and reporting, so practical experience in these areas is expected, and prior exposure to an agile environment is essential. Excellent analytical and problem-solving skills will be key assets in meeting the job requirements.

In addition to the mandatory qualifications, familiarity with the Hadoop ecosystem and experience with AWS (EMR) are advantageous, and at least 2 years of experience with real-time data stream platforms such as Kafka and Spark Streaming is preferred. Excellent communication and interpersonal skills are necessary for effective collaboration within the team and with stakeholders.

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Java Developer, you will draw on your 8 to 10 years of experience in Java, REST API, and Spring Boot to develop efficient and scalable solutions. Your expertise in AngularJS, ReactJS, or VueJS will be essential for creating dynamic, interactive user interfaces. A Bachelor's degree or higher in computer science, data science, or a related field is required to ensure a strong foundation in software development.

The role involves hands-on data cleaning, visualization, and reporting, enabling you to contribute to data-driven decision making. Working in an agile environment, you will apply excellent analytical and problem-solving skills to complex technical challenges, and your communication and interpersonal skills will be crucial for collaborating with team members and stakeholders. Familiarity with the Hadoop ecosystem and experience with AWS (EMR) are advantageous, and at least 2 years of experience with real-time data stream platforms such as Kafka and Spark Streaming will further strengthen your ability to build real-time data processing solutions, as illustrated in the sketch below.

If you are a proactive and innovative Java Developer looking to work with cutting-edge technologies and contribute to impactful projects, this role offers an exciting opportunity for professional growth and development.
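
Although this posting is Java-centric, the Kafka-plus-Spark-Streaming pattern it asks about can be sketched compactly with PySpark's Structured Streaming API (the same concepts carry over to Java). The broker address, topic, and event schema below are hypothetical placeholders.

```python
# Illustrative Structured Streaming job: consume JSON events from Kafka and
# count them by type in real time. Run with the spark-sql-kafka package on
# the classpath (e.g. via spark-submit --packages).
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

schema = StructType([
    StructField("event_type", StringType()),
    StructField("value", DoubleType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Kafka delivers raw bytes; decode and parse the JSON payload into columns.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
       .select("e.*")
)

counts = events.groupBy("event_type").count()

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```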

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are looking for a highly skilled and motivated Python/AWS/Big Data Engineer to join our data engineering team. The ideal candidate has hands-on experience with the Hadoop ecosystem and Apache Spark, plus programming expertise in Python (PySpark), Scala, and Java. Your responsibilities will include designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.

Virtusa values teamwork, quality of life, and professional and personal development. We are proud to have a team of 27,000 people globally who care about your growth and seek to provide you with exciting projects, opportunities, and state-of-the-art technologies throughout your career with us. At Virtusa, we believe in the potential of great minds coming together: we emphasize collaboration and a team environment, providing a dynamic place for talented individuals to nurture new ideas and strive for excellence.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The Applications Development Technology Lead Analyst is a senior position responsible for implementing new or updated application systems and programs in collaboration with the Technology team. Your main objective is to lead applications systems analysis and programming activities.

Responsibilities include partnering with management teams to integrate functions and achieve goals; identifying system enhancements needed for new products and process improvements; resolving high-impact problems and projects by evaluating complex business processes; providing expertise in applications programming; ensuring application design aligns with the architecture blueprint; developing standards for coding, testing, debugging, and implementation; gaining comprehensive knowledge of how business areas integrate; analyzing issues to develop innovative solutions; advising mid-level developers and analysts; assessing risk in business decisions; and adapting to changing priorities as a team player.

Required skills: strong knowledge of Spark using Java/Scala and the Hadoop ecosystem, with hands-on experience in Spark Streaming; proficiency in Java programming with experience in the Spring Boot framework; and familiarity with database technologies such as Oracle and the Starburst and Impala query engines. Knowledge of bank reconciliation tools such as Smartstream TLM Recs Premium, Exceptor, or Quickrec is an added advantage.

Qualifications: 10+ years of relevant experience in an applications development or systems analysis role; extensive experience in system analysis and programming of software applications; experience managing and implementing successful projects; Subject Matter Expert (SME) standing in at least one area of applications development; the ability to adjust priorities quickly; demonstrated leadership and project management skills; clear and concise communication; experience building and implementing reporting platforms; and a Bachelor's degree or equivalent experience (Master's degree preferred).

This job description is a summary of the work performed; other job-related duties may be assigned as needed.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a Site Reliability Engineering (SRE) Technical Leader on the Network Assurance Data Platform (NADP) team at Cisco ThousandEyes, you will be responsible for the reliability, scalability, and security of the cloud and big data platforms. You will represent the NADP SRE team, contribute to the technical roadmap, and collaborate with cross-functional teams to design, build, and maintain SaaS systems operating at multi-region scale. Your work will support machine learning (ML) and AI initiatives by keeping the platform infrastructure robust, efficient, and aligned with operational excellence.

You will design, build, and optimize cloud and data infrastructure to guarantee high availability, reliability, and scalability of big data and ML/AI systems, implementing SRE principles such as monitoring, alerting, error budgets, and fault analysis. You will collaborate with various teams to create secure, scalable solutions; troubleshoot technical problems; lead the architectural vision; and shape the technical strategy and roadmap. The role also involves mentoring and guiding teams, fostering a culture of engineering and operational excellence, engaging with customers and stakeholders to understand use cases and feedback, and applying strong programming skills to integrate software and systems engineering. You will develop strategic roadmaps, processes, plans, and infrastructure to efficiently deploy new software components at enterprise scale while enforcing engineering best practices.

Qualifications: 8-12 years of relevant experience and a bachelor's degree in computer science or equivalent; the ability to design and implement scalable solutions; hands-on cloud experience, preferably AWS; Infrastructure-as-Code skills; experience with observability tools; proficiency in programming languages such as Python or Go; and a good understanding of Unix/Linux systems and client-server protocols. Experience building cloud, big data, and/or ML/AI infrastructure is essential, along with a sense of ownership and accountability in architecting software and infrastructure at scale. Experience with the Hadoop ecosystem, certifications in cloud and security domains, and experience building or managing a cloud-based data platform are advantageous.

Cisco encourages individuals from diverse backgrounds to apply, valuing the perspectives and skills that emerge from employees with varied experiences, and believes diverse teams are better equipped to solve problems, innovate, and make a positive impact.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a Site Reliability Engineering (SRE) Technical Leader on the Network Assurance Data Platform (NADP) team at ThousandEyes, you will be responsible for the reliability, scalability, and security of cloud and big data platforms. You will represent the NADP SRE team in a dynamic environment and provide technical leadership in defining and executing the team's technical roadmap, collaborating with cross-functional teams including software development, product management, customers, and security. Your contributions will directly support machine learning (ML) and AI initiatives by ensuring a robust, efficient platform infrastructure aligned with operational excellence.

In this role, you will design, build, and optimize cloud and data infrastructure to ensure high availability, reliability, and scalability of big data and ML/AI systems, and collaborate across teams to create secure, scalable solutions that support ML/AI workloads and enhance operational efficiency through automation. Key responsibilities include troubleshooting complex technical problems, conducting root cause analyses, and contributing to continuous improvement; leading the architectural vision and shaping the team's technical strategy and roadmap; acting as a mentor and technical leader to foster a culture of engineering and operational excellence; and engaging with customers and stakeholders to translate use cases and feedback into actionable insights while effectively influencing stakeholders at all levels. You will apply strong programming skills to integrate software and systems engineering, build core data platform capabilities and automation to meet enterprise customer needs, and develop strategic roadmaps, processes, plans, and infrastructure to efficiently deploy new software components at enterprise scale while enforcing engineering best practices.

Qualifications: 8-12 years of relevant experience and a bachelor's degree in computer science or equivalent; the ability to design and implement scalable solutions with a focus on streamlining operations; strong hands-on cloud experience, preferably AWS; Infrastructure-as-Code skills, ideally with Terraform and EKS or Kubernetes; proficiency with observability tools such as Prometheus, Grafana, Thanos, CloudWatch, OpenTelemetry, and the ELK stack; the ability to write high-quality code in Python, Go, or an equivalent language; and a good understanding of Unix/Linux systems, system libraries, file systems, and client-server protocols. Experience building cloud, big data, and/or ML/AI infrastructure, architecting software and infrastructure at scale, and certifications in cloud and security domains are beneficial.

Cisco emphasizes diversity and encourages candidates to apply even if they do not meet every qualification; diverse perspectives and skills are valued, and diverse teams are better equipped to solve problems, innovate, and create a positive impact.

Posted 2 weeks ago

Apply

8.0 - 10.0 years

10 - 12 Lacs

Bengaluru

Work from Office

Senior Data Engineer (Databricks, PySpark, SQL, Cloud Data Platforms, Data Pipelines)

Job Summary
Synechron is seeking a highly skilled and experienced Data Engineer to join our innovative analytics team in Bangalore. The primary purpose of this role is to design, develop, and maintain scalable data pipelines and architectures that empower data-driven decision making and advanced analytics initiatives. As a critical contributor within our data ecosystem, you will enable the organization to harness large, complex datasets efficiently, supporting strategic business objectives and ensuring high standards of data quality, security, and performance. Your expertise will directly contribute to building robust, efficient, and secure data solutions that drive business value across multiple domains.

Required Software & Tools:
- Databricks platform (hands-on experience with Databricks notebooks, clusters, and workflows)
- PySpark (proficient in developing and optimizing Spark jobs)
- SQL (advanced proficiency in writing and optimizing complex queries)
- Data orchestration tools such as Apache Airflow or similar (experience scheduling and managing data workflows)
- Cloud data platforms (experience with cloud environments such as AWS, Azure, or Google Cloud)
- Data warehousing solutions (Snowflake highly preferred)

Preferred Software & Tools:
- Kafka or other streaming frameworks (e.g., Confluent, MQTT)
- CI/CD tools for data pipelines (e.g., Jenkins, GitLab CI)
- DevOps practices for data workflows
- Programming languages: Python (expert level); familiarity with other languages such as Java or Scala is advantageous

Overall Responsibilities:
- Architect, develop, and maintain scalable, resilient data pipelines and architectures supporting business analytics, reporting, and data science use cases
- Collaborate closely with data scientists, analysts, and cross-functional teams to gather requirements and deliver optimized data solutions aligned with organizational goals
- Ensure data quality, consistency, and security across all data workflows, adhering to best practices and compliance standards
- Optimize data processes for enhanced performance, reliability, and cost efficiency
- Integrate data from multiple sources, including cloud data services and streaming platforms, ensuring seamless data flow and transformation
- Lead efforts in performance tuning and troubleshooting data pipelines to resolve bottlenecks and improve throughput
- Stay up to date with emerging data engineering technologies and contribute to continuous improvement initiatives within the team

Technical Skills (By Category):
- Programming languages: Python, SQL (essential); Scala, Java (preferred)
- Databases/data management: data modeling, ETL/ELT processes, data warehousing with Snowflake highly preferred (essential); NoSQL databases, Hadoop ecosystem (preferred)
- Cloud technologies: experience with cloud data services (AWS, Azure, GCP) and deploying data pipelines in cloud environments (essential); cloud-native data tools and architecture design (preferred)
- Frameworks and libraries: PySpark, Spark SQL, Kafka, Airflow (essential); streaming frameworks, TensorFlow for data prep (preferred)
- Development tools and methodologies: version control (Git), CI/CD pipelines, Agile methodologies (essential); DevOps practices in data engineering, containerization with Docker and Kubernetes (preferred)
- Security protocols: familiarity with data security, encryption standards, and compliance best practices

Experience:
- Minimum of 8 years of professional experience in data engineering or related roles
- Proven track record of designing and deploying large-scale data pipelines using Databricks, PySpark, and SQL
- Practical experience in data modeling, data warehousing, and ETL/ELT workflows
- Experience working with cloud data platforms and streaming data frameworks such as Kafka or equivalent
- Demonstrated ability to work with cross-functional teams, translating business needs into technical solutions
- Experience with data orchestration and automation tools is highly valued
- Prior experience implementing CI/CD pipelines or DevOps practices for data workflows (preferred)

Day-to-Day Activities:
- Design, develop, and troubleshoot data pipelines for ingestion, transformation, and storage of large datasets (see the orchestration sketch below)
- Collaborate with data scientists and analysts to understand data requirements and optimize existing pipelines
- Automate data workflows and improve pipeline efficiency through performance tuning and best practices
- Conduct data quality audits and ensure data security protocols are followed
- Manage and monitor data workflows, troubleshoot failures, and implement fixes proactively
- Contribute to documentation, code reviews, and knowledge sharing within the team
- Stay informed of evolving data engineering tools, techniques, and industry best practices, incorporating them into daily work

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Relevant certifications such as Databricks Certified Data Engineer, AWS Certified Data Analytics, or equivalent (preferred)
- Continuous learning through courses, workshops, or industry conferences on data engineering and cloud technologies

Professional Competencies:
- Strong analytical and problem-solving skills with a focus on scalable solutions
- Excellent communication skills for effective collaboration with technical and non-technical stakeholders
- Ability to prioritize tasks, manage time effectively, and deliver within tight deadlines
- Demonstrated leadership in guiding team members and driving project success
- Adaptability to evolving technological landscapes and innovative thinking
- Commitment to data privacy, security, and ethical handling of information
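
As a concrete illustration of the orchestration work listed above, a minimal Apache Airflow DAG with two placeholder tasks. The DAG id, schedule, and task bodies are hypothetical, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# Illustrative Airflow DAG: a daily two-step pipeline skeleton.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("extracting raw data")   # placeholder for a real ingestion step

def load():
    print("loading to warehouse")  # placeholder for a real load step

with DAG(
    dag_id="daily_sales_pipeline",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task       # run extract before load
```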

Posted 1 month ago

Apply

6.0 - 8.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Requirements:
- 6+ years of experience in information technology, with a minimum of 3-5 years managing and administering Hadoop/Cloudera environments
- Cloudera CDP (Cloudera Data Platform), Cloudera Manager, and related tools
- Hadoop ecosystem components (HDFS, YARN, Hive, HBase, Spark, Impala, etc.)
- Linux system administration, with experience in scripting languages (Python, Bash, etc.) and configuration management tools (Ansible, Puppet, etc.)
- Security tools (Kerberos, Ranger, Sentry), plus Docker, Kubernetes, and Jenkins
- Cloudera Certified Administrator for Apache Hadoop (CCAH) or a similar certification
- Cluster management, optimization, best-practice implementation, collaboration, and support (a small administration-scripting sketch follows below)
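
To illustrate the administration-scripting side of this role, a minimal sketch that shells out to the standard `hdfs dfsadmin -report` command and flags high disk usage. The 80% threshold is an arbitrary example, and the script assumes it runs on a cluster node with the `hdfs` CLI on PATH.

```python
# Illustrative cluster health check: parse `hdfs dfsadmin -report` output
# and warn when DFS usage crosses a threshold.
import subprocess

THRESHOLD_PCT = 80.0  # arbitrary example threshold

report = subprocess.run(
    ["hdfs", "dfsadmin", "-report"],
    capture_output=True, text=True, check=True,
).stdout

for line in report.splitlines():
    # The cluster summary prints a line like "DFS Used%: 42.13%".
    if line.strip().startswith("DFS Used%"):
        used_pct = float(line.split(":")[1].strip().rstrip("%"))
        status = "WARN" if used_pct > THRESHOLD_PCT else "OK"
        print(f"{status}: DFS used {used_pct:.2f}%")
        break
```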

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Chennai

Hybrid

Duration: 8 months
Work Type: Onsite

Position Description:
We are looking for qualified Data Scientists who can develop scalable solutions to complex real-world problems using machine learning, big data, statistics, and optimization. Potential candidates should have hands-on experience applying first-principles methods, machine learning, data mining, and text mining techniques to build analytics prototypes that work on massive datasets. Candidates should have experience manipulating both structured and unstructured data in various formats, sizes, and storage mechanisms; excellent problem-solving skills with an inquisitive mind that challenges existing practices; and exposure to multiple programming languages and analytical tools, with the flexibility to use the requisite tools and languages for the problem at hand.

Skills Required: Machine Learning, GenAI, LLM
Skills Preferred: Python, Google Cloud Platform, BigQuery

Experience Required: 3+ years of hands-on experience using machine learning and text mining tools and techniques such as clustering, classification, decision trees, random forests, support vector machines, deep learning, neural networks, reinforcement learning, and other numerical algorithms (a small classification sketch follows below)
Experience Preferred: 3+ years of experience in at least one of the following languages: Python, R, MATLAB, SAS; experience with Google Cloud Platform (GCP), including Vertex AI, BigQuery, DBT, NoSQL databases, and the Hadoop ecosystem

Education Required: Bachelor's degree
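
As a small illustration of one listed technique (random forests for classification), a minimal scikit-learn sketch on a bundled toy dataset. The dataset and hyperparameters are placeholders chosen for demonstration, not taken from the posting.

```python
# Illustrative random-forest classification with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # toy dataset bundled with sklearn
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```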

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click


Download the Mobile App

Instantly access job listings, apply easily, and track applications.
