
3291 Big Data Jobs - Page 7

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a member of the Google Cloud Consulting Professional Services team, you will have the opportunity to contribute to the success of businesses by guiding them through their cloud journey and leveraging Google's global network, data centers, and software infrastructure. Your role will involve assisting customers in transforming their businesses by utilizing technology to connect with customers, employees, and partners.

Your responsibilities will include interacting with stakeholders to understand customer requirements and providing recommendations for solution architectures. You will collaborate with technical leads and partners to lead migration and modernization projects to Google Cloud Platform (GCP). Additionally, you will design, build, and operationalize data storage and processing infrastructure using Cloud-native products, ensuring data quality and governance procedures are in place to maintain accuracy and reliability.

In this role, you will work on data migrations and modernization projects, and design data processing systems optimized for scaling. You will troubleshoot platform/product tests, understand data governance and security controls, and travel to customer sites to deploy solutions and conduct workshops to educate and empower customers. Furthermore, you will be responsible for translating project requirements into goals and objectives, and creating work breakdown structures to manage internal and external stakeholders effectively.

You will collaborate with Product Management and Product Engineering teams to drive excellence in products and contribute to the digital transformation of organizations across various industries. By joining this team, you will play a crucial role in shaping the future of businesses of all sizes and assisting them in leveraging Google Cloud to accelerate their digital transformation journey.

Posted 1 week ago

Apply

2.0 - 6.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Be part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions.

Job Title: Data Engineer II / Senior Data Engineer
Job Location: Bengaluru, Pune - India

Job Summary: We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for Software Developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.

Responsibilities:
- Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources.
- Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data.
- Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems.
- Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability.
- Automate repetitive data engineering tasks and optimize data workflows for performance and scalability.
- Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations.
- Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability.

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data pipelines using AWS services such as S3, Glue, Lambda, and Step Functions (see the sketch below).
- Collaborate with cross-functional teams to gather requirements and design solutions for complex data engineering projects.
- Develop ETL/ELT pipelines using Python scripts and SQL queries to extract insights from structured and unstructured data sources.
- Implement web scraping techniques to collect relevant data from various websites and APIs.
- Ensure high availability of the system by implementing monitoring tools like CloudWatch.

Desired Profile:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role.
- Strong experience with ETL tools like Apache Airflow, Talend, or Informatica.
- Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
- Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development.
- Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery).
- Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink.
- Experience in data warehousing concepts and building data models (e.g., Snowflake, Redshift).
- Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA).
- Familiarity with version control systems like Git.

HiLabs is an equal opportunity employer (EOE). No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age, nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results.

Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skillset, we welcome your application.
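For illustration, a minimal sketch of the kind of S3-to-S3 PySpark ETL job such a pipeline might contain; the bucket names, paths, and columns are hypothetical, and a production job would add error handling and orchestration (e.g., via Step Functions):

```python
# Illustrative sketch only: bucket names, paths, and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims_etl").getOrCreate()

# Ingest raw CSV records from S3.
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/claims/")

# Clean and transform: drop incomplete rows, normalize types, derive a column.
clean = (
    raw.dropna(subset=["claim_id", "amount"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("ingest_date", F.current_date())
)

# Write the curated dataset back to S3 as partitioned Parquet for analytics.
clean.write.mode("overwrite").partitionBy("ingest_date").parquet(
    "s3://example-curated-bucket/claims/"
)
```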

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

The IT Quality Intermediate Analyst role is a developmental position within the Technology Quality job family at Citigroup. As an Intermediate Analyst, you will be responsible for independently addressing various challenges and will have the freedom to solve complex problems. Your role will involve integrating specialized knowledge with industry standards, understanding how your team contributes to the overall objectives, and applying analytical thinking and data analysis tools effectively. Attention to detail is crucial in making judgments and recommendations based on factual information, as your decisions may have a broader business impact.

Your responsibilities will include supporting initiatives related to User Acceptance Testing (UAT) processes and product rollouts. You will collaborate with technology project managers, UAT professionals, and users to design and implement appropriate scripts/plans for application testing. Additionally, you will support automation initiatives by using existing automation tools for testing and ensuring the automation of assigned tasks through analysis; a sketch of such a test appears below.

In this role, you will conduct various process monitoring, product evaluation, and audit assignments of moderate complexity. You will report issues, make recommendations for solutions, and ensure project standards and procedures are documented and followed throughout the software development life cycle. Monitoring products and processes for conformance to standards and procedures, documenting findings, and conducting root cause analyses to provide recommended improvements will also be part of your responsibilities.

You will need to gather, maintain, and create reports on quality metrics, exhibit a good understanding of procedures and concepts within your technical area, and have a basic knowledge of these elements in other areas. By making evaluative judgments based on factual information and resolving problems with acquired technical experience, you will directly impact the business and ensure the quality of work provided by yourself and others.

Qualifications:
- 3-6 years of Quality Assurance (QA) experience in the Financial Services industry preferred
- Experience in Big Data, ETL testing, and requirement reviews
- Understanding of QA within the Software Development Lifecycle (SDLC) and QA methodologies
- Knowledge of Quality Processes
- Logical analysis skills, attention to detail, and problem-solving abilities
- Ability to work to deadlines
- Clear and concise written and verbal communication skills
- Experience in defining, designing, and executing test cases
- Automation domain experience using the Python tech stack
- Experience in Python and PySpark

Education:
- Bachelor's/University degree or equivalent experience

You will also provide informal guidance to new team members, perform other assigned duties and functions, and assess risk when making business decisions to safeguard Citigroup and its assets. Your role will involve compliance with laws, rules, and regulations, as well as adherence to policies and ethical standards.

If you require a reasonable accommodation due to a disability for using search tools or applying for a career opportunity, please review the Accessibility at Citi information. For more details on Citigroup's EEO Policy Statement and the Know Your Rights poster, please refer to the relevant documents.
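As a rough illustration of automated ETL testing with the Python stack, here is a minimal sketch; the transform under test and its columns are hypothetical:

```python
# Illustrative sketch only: the transform under test and its columns are hypothetical.
from pyspark.sql import SparkSession

def dedupe_accounts(df):
    """Example ETL transform under test: drop duplicate account rows."""
    return df.dropDuplicates(["account_id"])

spark = SparkSession.builder.appName("etl_tests").getOrCreate()

source = spark.createDataFrame(
    [(1, "open"), (1, "open"), (2, "closed")],
    ["account_id", "status"],
)

result = dedupe_accounts(source)

# Assert the expected post-condition, as an automated test script would.
assert result.count() == 2, "duplicate account_id rows were not removed"
print("ETL de-duplication check passed")
```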

Posted 1 week ago

Apply

6.0 - 12.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

You will be responsible for leveraging your 6-12 years of experience in Data Warehouse and Big Data technologies to contribute to our team in Trivandrum. Your expertise in programming languages such as Scala, Spark, PySpark, Python, and SQL, along with Big Data technologies like Hadoop, Hive, Pig, and MapReduce, will be crucial for this role. Additionally, your proficiency in ETL and data engineering, including data warehouse design, ETL, data analytics, data mining, and data cleansing, will be highly valued.

As part of our team, you will be expected to have hands-on experience with cloud platforms like GCP and Azure, as well as tools and frameworks such as Apache Hadoop, Airflow, Kubernetes, and containers. Your skills in data pipeline creation, optimization, troubleshooting, and data validation will play a key role in ensuring the efficiency and accuracy of our data processes (a small optimization sketch follows below).

Ideally, you should have at least 4 years of experience working with Scala, Spark, PySpark, Python, and SQL, in addition to 3+ years of strategic data planning, governance, and standard procedures. Experience in Agile environments and a good understanding of Java, ReactJS, and Node.js will be beneficial for this role. Moreover, your ability to work with data analytics, machine learning, and optimization will be advantageous. Knowledge of managing big data workloads and containerized environments, and experience in analyzing large datasets to optimize data workflows, will further strengthen your profile for this position.

UST is a global digital transformation solutions provider with a track record of working with some of the world's best companies for over 20 years. With a team of over 30,000 employees in 30 countries, we are committed to making a real impact through transformation. If you are passionate about innovation, agility, and driving positive change through technology, we invite you to join us on this journey of creating boundless impact and touching billions of lives in the process.
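One example of the kind of pipeline optimization this role involves is replacing a shuffle-heavy join with a broadcast join when one side is small. A minimal sketch, assuming hypothetical table paths:

```python
# Illustrative sketch only: table paths and the join key are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("pipeline_optimization").getOrCreate()

facts = spark.read.parquet("/data/warehouse/transactions")  # large fact table
dims = spark.read.parquet("/data/warehouse/merchants")      # small lookup table

# Broadcasting ships the small table to every executor, so the large
# table never has to be shuffled across the cluster.
joined = facts.join(broadcast(dims), on="merchant_id", how="left")

joined.explain()  # the plan should show a BroadcastHashJoin
```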

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

As a Senior Staff Software Engineer in Data Lakehouse Engineering, you will play a crucial role in designing and implementing the Data Lakehouse platform, supporting both Data Engineering and Data Lakehouse applications. Your responsibilities will include overseeing Data Engineering pipeline productionalization and end-to-end data pipelines: model development, deployment, monitoring, refresh, and more. Additionally, you will be involved in driving technology development and architecture to ensure the platforms, systems, tools, models, and services meet the technical standards for security, quality, reliability, usability, scalability, performance, efficiency, and operability needed to serve the evolving needs of Wex and its customers. It is essential to balance both near-term and long-term requirements in collaboration with other teams across the organization.

Your technical ownership will extend to Wex's Data Lakehouse data architecture and service technology implementations, emphasizing architecture, technical direction, engineering best practices, and quality/compliance. Collaboration with the Platform Engineering and Data Lakehouse Engineering teams will be a key aspect of your role. The vision behind Wex's Data Lakehouse is to create a unified, scalable, and intelligent data infrastructure that enables the organization to leverage its data effectively, with goals such as data democratization, agility and scalability, and advanced insights and innovation through Data & AI technology.

We are seeking a highly motivated and experienced Software Engineer to join our organization and contribute to building out the Data Lakehouse Platform for Wex. Reporting to the Sr. Manager of Data Lakehouse Engineering in Bangalore, the ideal candidate will possess deep technical expertise in building and scaling data lakehouse environments, coupled with strong leadership and communication skills to align efforts across the organization.

Your impact will be significant as you lead and drive the development of technology and platform for the company's Data Lakehouse requirements, ensuring functional richness, reliability, performance, and flexibility of the platform. You will be instrumental in designing the architecture, leading the implementation of the Data Lakehouse system and services, and challenging the status quo to drive technical solutions that effectively serve the broad risk area of Wex. Collaboration with various engineering teams, information security teams, and external partners will be essential to ensure the security, privacy, and integration of the Data Lake Platform. Moreover, you will be responsible for creating, prioritizing, managing, and executing roadmaps and project plans, as well as reporting on the status of development, quality, operations, and system performance.

Your role will involve driving the technical vision and strategy of the Data Lakehouse to meet business needs, setting high standards for your team, providing technical guidance and mentorship, and fostering an environment of continuous learning and innovation. Upholding strong engineering principles and ensuring a culture of transparency and inclusion will be integral to your leadership.

To be successful in this role, you should bring at least 10 years of software design and development experience at large scale and strong software development skills in your chosen programming language. Experience with data lakehouse formats, Spark programming, cloud architecture tools and services, CI/CD automation, and agile development practices will be advantageous. Additionally, you should possess excellent analytical skills, mentorship capabilities, and strong written and verbal communication skills.

In terms of personal characteristics, you should demonstrate a collaborative, mission-driven style, high standards of integrity and corporate stewardship, and the ability to operate in a fast-paced entrepreneurial environment. Leading with empathy, fostering a culture of trust and transparency, and communicating effectively in various settings will be key to your success. You should also exhibit talent development and scouting abilities, intellectual curiosity, learning agility, and the capacity to drive change through influence and stakeholder management across a complex business environment.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As an Infoscion, your primary responsibility will involve addressing customer issues, diagnosing problem areas, designing innovative solutions, and facilitating deployment to ensure client satisfaction. You will be involved in developing proposals, contributing to solution design, configuring products, conducting demonstrations, and preparing effort estimates that align with customer budgetary requirements and organizational financial guidelines. Additionally, you will actively lead small projects and participate in unit-level and organizational initiatives to deliver high-quality, value-adding solutions to customers. If you are passionate about helping clients navigate their digital transformation journey, this role is tailored for you.

Your technical expertise should include proficiency in Big Data technologies such as Bigtable; cloud integration, specifically Azure Data Factory (ADF); and experience with data on cloud platforms, particularly AWS. Moreover, you should possess the ability to develop value-driven strategies, understand software configuration management systems, stay updated on the latest technologies and industry trends, and demonstrate logical thinking and problem-solving skills.

Furthermore, familiarity with financial processes, pricing models, industry domain knowledge, client interfacing skills, project management, and team management is essential for this role. In summary, this position requires a proactive and innovative mindset, technical proficiency in Big Data and cloud technologies, strategic thinking, problem-solving abilities, and effective communication and leadership skills to deliver impactful solutions to clients.

Posted 1 week ago

Apply

7.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a Data Engineering Architect, you will be responsible for designing and implementing large-scale, data-centric distributed applications. With a BTech degree in computer science or engineering, or equivalent work experience, you will have a strong background in architecting and operating cloud-based solutions. Your expertise will include a deep understanding of core disciplines such as compute, networking, storage, security, and databases. Additionally, you should possess knowledge of data engineering concepts like storage, governance, cataloging, data quality, and data modeling.

You should be well-versed in architecture patterns such as data lake, data lakehouse, and data mesh. Experience with data warehousing concepts and hands-on work with tools like Hive, Redshift, Snowflake, and Teradata is essential. You will also have experience in migrating legacy solutions to the cloud and working with AWS services like EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, and DataZone. A thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, and HBase is required. Familiarity with designing analytical solutions using AWS cognitive services like Textract, Comprehend, and Rekognition, combined with SageMaker, is advantageous. Your experience with modern development workflows, programming or scripting languages (Python/Java/Scala), and AWS certifications will be valuable assets in this role.

In this position, you will drive innovation within the Data Engineering domain by designing reusable accelerators, blueprints, and libraries. You should be capable of leading a technology team, fostering an innovative mindset, and enabling fast-paced deliveries. Adapting to new technologies, learning quickly, managing ambiguity, and collaborating with business stakeholders are key aspects of this role. Strong presentation skills and the ability to engage with various stakeholders, including executives, IT management, and developers, are essential.

Your responsibilities will include driving technology/software sales or pre-sales consulting discussions, ensuring end-to-end ownership of tasks, and delivering high-quality software development with comprehensive documentation. Sharing knowledge and experience with other teams, conducting technical training sessions, and contributing to whitepapers, case studies, and blogs are also part of the role.

Key Skills: AWS, Big Data, Spark, Technical Architecture

Posted 1 week ago

Apply

18.0 - 22.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

This is a senior leadership position within the Business Information Management (BIM) Practice, where you will be responsible for the overall vision, strategy, delivery, and operations of key accounts in BIM. You will work closely with the global executive team, subject matter experts, solution architects, project managers, and client teams to conceptualize, build, and operate Big Data solutions. Your role will involve communicating with internal management, client sponsors, and senior leaders on project status, risks, solutions, and more.

In the client delivery leadership part of the role, you will be accountable for delivering at least $10M+ in revenue using information management solutions such as Big Data, Data Warehouse, Data Lake, Gen AI, Master Data Management systems, Business Intelligence and reporting solutions, IT architecture consulting, cloud platforms (AWS/Azure), and SaaS/PaaS-based solutions. In addition, you will play a crucial practice and team leadership role, exhibiting qualities like self-driven initiative, customer focus, problem-solving skills, learning agility, the ability to handle multiple projects, excellent communication, and the leadership skills to coach and mentor staff.

As a qualified candidate, you should hold an MBA in Business Management and a Bachelor of Computer Science. You should have 18+ years of prior experience, preferably including at least 5 years in the Pharma Commercial domain, delivering customer-focused information management solutions. Your skills should encompass successful end-to-end DW implementations using technologies like Big Data, Data Management, and BI technologies. Leadership qualities, team management experience, communication skills, and hands-on knowledge of databases, SQL, and reporting solutions are essential. Preferred skills include teamwork, leadership, motivation to learn and grow, ownership, cultural fit, talent management, and capability building/thought leadership.

As part of Axtria, a global provider of cloud software and data analytics to the Life Sciences industry, you will contribute to transforming the product commercialization journey to drive sales growth and improve healthcare outcomes for patients. Axtria values technology innovation and offers a transparent and collaborative culture with opportunities for training, career progression, and meaningful work in a fun environment. If you are a driven and experienced professional with a passion for leadership in information management technology and the Pharma domain, this role offers a unique opportunity to make a significant impact and grow within a dynamic and innovative organization.

Posted 1 week ago

Apply

12.0 - 20.0 years

35 - 40 Lacs

Mumbai

Work from Office

Job Title: Big Data Developer - Project Support & Mentorship
Location: Mumbai

Position Overview: We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent. You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members.

Key Responsibilities:
- Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions.
- Support ongoing client projects, addressing technical challenges and ensuring smooth delivery.
- Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution.
- Review code and provide feedback to junior engineers to maintain high-quality and scalable solutions.
- Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka (a streaming sketch follows below).
- Lead by example in object-oriented development, particularly using Scala and Java.
- Translate complex requirements into clear, actionable technical tasks for the team.
- Contribute to the development of ETL processes for integrating data from various sources.
- Document technical approaches, best practices, and workflows for knowledge sharing within the team.

Required Skills and Qualifications:
- 8+ years of professional experience in Big Data development and engineering.
- Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka.
- Solid object-oriented development experience with Scala and Java.
- Strong SQL skills with experience working with large data sets.
- Practical experience designing, installing, configuring, and supporting Big Data clusters.
- Deep understanding of ETL processes and data integration strategies.
- Proven experience mentoring or supporting junior engineers in a team setting.
- Strong problem-solving, troubleshooting, and analytical skills.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.).
- Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc).
- Exposure to Agile or DevOps practices in Big Data project environments.

What We Offer:
- Opportunity to work on challenging, high-impact Big Data projects.
- Leadership role in shaping and mentoring the next generation of engineers.
- Supportive and collaborative team culture.
- Flexible working environment.
- Competitive compensation and professional growth opportunities.
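To illustrate the Kafka-plus-Spark side of this stack, here is a minimal Structured Streaming sketch, assuming the Kafka connector package is available; the broker addresses, topic name, and paths are hypothetical:

```python
# Illustrative sketch only: brokers, topic, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

# Subscribe to a Kafka topic; each record arrives as key/value bytes.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "orders")
    .load()
)

# Decode the payload and keep the fields downstream consumers need.
decoded = events.select(
    col("key").cast("string").alias("order_id"),
    col("value").cast("string").alias("payload"),
    col("timestamp"),
)

# Land micro-batches to HDFS as Parquet, with checkpointing for recovery.
query = (
    decoded.writeStream.format("parquet")
    .option("path", "/data/landing/orders")
    .option("checkpointLocation", "/data/checkpoints/orders")
    .start()
)
query.awaitTermination()
```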

Posted 1 week ago

Apply

5.0 - 10.0 years

27 - 40 Lacs

Noida, Pune, Bengaluru

Work from Office

Description: We are seeking a highly skilled Senior Data Engineer with strong expertise in Python development and MySQL, along with hands-on experience in Big Data technologies, PySpark, and cloud platforms such as AWS, GCP, or Azure. The ideal candidate will play a critical role in designing and developing scalable data pipelines and infrastructure to support advanced analytics and data-driven decision-making across teams.

Requirements:
- 7 to 12 years of overall experience in data engineering or related domains.
- Proven ability to work independently on analytics engines like Big Data and PySpark.
- Strong hands-on experience in Python programming, with a focus on data handling and backend services.
- Proficiency in MySQL, with the ability to write and optimize complex queries; knowledge of Redis is a plus.
- Solid understanding and hands-on experience with public cloud services (AWS, GCP, or Azure).
- Familiarity with monitoring tools such as Grafana, ELK, Loki, and Prometheus.
- Experience with IaC tools like Terraform and Helm.
- Proficiency in containerization and orchestration using Docker and Kubernetes.
- Strong collaboration and communication skills to work in agile and cross-functional environments.

Job Responsibilities:
- Design, develop, and maintain robust data pipelines using Big Data and PySpark for ETL/ELT processes (see the sketch below).
- Build scalable and efficient data solutions across cloud platforms (AWS/GCP/Azure) using modern tools and technologies.
- Write high-quality, maintainable, and efficient code in Python for data engineering tasks.
- Develop and optimize complex queries using MySQL and work with caching systems like Redis.
- Implement monitoring and logging using Grafana, ELK, Loki, and Prometheus to ensure system reliability and performance.
- Use Terraform and Helm for infrastructure provisioning and automation (Infrastructure as Code).
- Leverage Docker and Kubernetes for containerization and orchestration of services.
- Collaborate with cross-functional teams, including engineering, product, and analytics, to deliver impactful data solutions.
- Contribute to system architecture decisions and influence best practices in cloud data infrastructure.

What We Offer:
- Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
- Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities!
- Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
- Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill trainings.
- Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
- Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidised rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can enjoy coffee or tea with colleagues over a game, along with discounts at popular stores and restaurants!
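A minimal sketch of the MySQL-to-lake pattern this role describes, assuming the MySQL JDBC driver is on the Spark classpath; the connection details, table names, and bucket path are hypothetical:

```python
# Illustrative sketch only: connection details and names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mysql_to_lake").getOrCreate()

# JDBC read; in production, credentials would come from a secrets manager.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://db.example.internal:3306/sales")
    .option("dbtable", "orders")
    .option("user", "etl_user")
    .option("password", "********")
    .load()
)

# A typical ELT step: aggregate, then persist to cloud storage as Parquet.
daily = orders.groupBy("order_date").count()
daily.write.mode("overwrite").parquet("s3://example-lake/curated/daily_orders/")
```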

Posted 1 week ago

Apply

12.0 - 22.0 years

25 - 32 Lacs

Chennai, Bengaluru

Hybrid

Technical Manager, Data Science & AI
Location: Chennai/Bangalore
Experience: 15+ Years
Employment Type: Full Time

Role Description: We are seeking a visionary Technical Manager to lead our Data Science and AI function. This role is pivotal in leveraging advanced analytical techniques, machine learning, and artificial intelligence to solve complex business problems and create innovative data products. The ideal candidate will be a recognized thought leader, capable of leading and inspiring a team of data scientists, collaborating with diverse stakeholders, and driving the successful implementation of AI/ML solutions from ideation to deployment.

Responsibilities:
- Define and execute the strategic roadmap for data science and AI initiatives, identifying opportunities for innovation and competitive advantage.
- Lead, mentor, and develop a team of data scientists and ML engineers, fostering a culture of scientific rigor, experimentation, and ethical AI development.
- Oversee the end-to-end lifecycle of machine learning models, from problem definition and data exploration to model building, validation, deployment, and monitoring (see the sketch below).
- Translate complex business challenges into data science problems, developing robust and scalable solutions.
- Partner with pre-sales, sales, marketing, Data Engineering, Business Intelligence, and other departments to identify use cases, gather requirements, and integrate AI/ML solutions into business processes.
- Stay current with the latest advancements in AI, machine learning, deep learning, and natural language processing, and drive their adoption where beneficial.
- Establish best practices for MLOps, ensuring efficient and reliable deployment, monitoring, and maintenance of ML models in production environments.

Tools & Technologies:
- Programming Languages: Python (Scikit-learn, TensorFlow, Keras, PyTorch, Pandas, NumPy, SciPy), R.
- Machine Learning Frameworks: TensorFlow, PyTorch, Scikit-learn, XGBoost, LightGBM.
- Big Data Frameworks: Apache Spark (PySpark), Databricks.
- Cloud AI/ML Services: AWS SageMaker, Azure AI Services, Google Cloud AI Platform.
- Data Visualization: Matplotlib, Seaborn, Plotly, Dash, Streamlit.
- Databases: SQL (PostgreSQL, MySQL), NoSQL (MongoDB, Cassandra).
- MLOps Tools: MLflow, Kubeflow, Docker, Kubernetes.
- Version Control: Git.
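As a small, hedged illustration of the model-lifecycle tooling named above (scikit-learn training plus MLflow tracking), with a public demo dataset standing in for real data:

```python
# Illustrative sketch only: the experiment name and dataset are stand-ins.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-classifier")
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics, and the model artifact for reproducibility.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```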

Posted 1 week ago

Apply

12.0 - 22.0 years

25 - 32 Lacs

Chennai, Bengaluru

Work from Office

Technical Manager, Data Engineering
Location: Chennai/Bangalore
Experience: 15+ Years
Employment Type: Full Time

Role Description: We are looking for a seasoned Technical Manager to lead our Data Engineering function. This role demands a deep understanding of data architecture, pipeline development, and data infrastructure. The ideal candidate will be a thought leader in the data engineering space, capable of guiding and mentoring a team, collaborating effectively with various business units, and driving the adoption of cutting-edge tools and technologies to build robust, scalable, and efficient data solutions.

Responsibilities:
- Define and champion the strategic direction for data engineering, staying abreast of industry trends and emerging technologies.
- Lead, mentor, and develop a high-performing team of data engineers, fostering a culture of technical excellence, innovation, and continuous learning.
- Design, implement, and maintain scalable, reliable, and secure data pipelines and infrastructure (see the orchestration sketch below). Ensure data quality, integrity, and accessibility.
- Oversee the end-to-end delivery of data engineering projects, ensuring timely completion, adherence to best practices, and alignment with business objectives.
- Partner closely with pre-sales, sales, marketing, Business Intelligence, Data Science, and other departments to understand data needs, propose solutions, and support resource deployment for active data projects.
- Evaluate, recommend, and implement new data engineering tools, platforms, and methodologies to enhance capabilities and efficiency.
- Identify and address performance bottlenecks in data systems, ensuring optimal data processing and storage.

Tools & Technologies:
- Cloud Platforms: AWS (S3, Glue, EMR, Redshift, Athena, Lambda, Kinesis), Azure (Data Lake Storage, Data Factory, Databricks, Synapse Analytics), Google Cloud Platform (Cloud Storage, Dataflow, Dataproc, BigQuery).
- Big Data Frameworks: Apache Spark, Apache Flink, Apache Kafka, HDFS.
- Data Warehousing/Lakes: Snowflake, Databricks Lakehouse, Google BigQuery, Amazon Redshift, Azure Synapse Analytics.
- ETL/ELT Tools: Apache Airflow, Talend, Informatica, dbt, Fivetran, Stitch.
- Data Modeling: Star Schema, Snowflake Schema, Data Vault.
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB.
- Programming Languages: Python (Pandas, PySpark), Scala, Java.
- Containerization/Orchestration: Docker, Kubernetes.
- Version Control: Git.
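For illustration, a minimal Apache Airflow DAG of the kind such pipelines are orchestrated with; the task bodies and schedule are hypothetical placeholders:

```python
# Illustrative sketch only: task logic and schedule are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from source systems")

def transform():
    print("clean and model the data")

def load():
    print("publish curated tables to the warehouse")

with DAG(
    dag_id="daily_elt",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # `schedule_interval` on Airflow versions before 2.4
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    t1 >> t2 >> t3  # linear dependency chain: extract, then transform, then load
```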

Posted 1 week ago

Apply

7.0 - 9.0 years

20 - 25 Lacs

Hyderabad

Work from Office

At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive change in an increasingly virtual world, and it drives us beyond generational gaps and disruptions of the future. We are looking to hire Tableau professionals in the following areas:

Experience: 7-9 years

Responsibilities:
- Prepare the required data model in Tableau from the source files.
- Build the required dashboard based on the wireframe designed.
- Expertise in Tableau dashboard development.
- Expert in Tableau data model setup.
- Strong experience in SQL.
- Ensure compliance with data governance and security policies.
- Work closely with business and dev teams to translate the business/functional requirements into technical specifications that drive Big Data solutions to meet functional requirements.

Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; and stable employment with a great atmosphere and ethical corporate culture.

Posted 1 week ago

Apply

8.0 - 13.0 years

20 - 25 Lacs

Chennai

Work from Office

Your work days are brighter here. At Workday, we value our candidates' privacy and data security. Workday will never ask candidates to apply to jobs through websites that are not Workday Careers. Please be aware of sites that may ask you to input your data in connection with a job posting that appears to be from Workday but is not. In addition, Workday will never ask candidates to pay a recruiting fee, or to pay for consulting or coaching services, in order to apply for a job at Workday.

About the Team: Workday Prism Analytics is a self-service analytics solution for Finance and Human Resources teams that allows companies to bring external data into Workday, combine it with existing people or financial data, and present it via Workday's reporting framework. This gives the end user a comprehensive collection of insights that can be acted upon in a flash. We design, build, and maintain the data warehousing systems that underpin our Analytics products. We straddle both applications and systems; the ideal candidate for this role is someone who has a passion for solving hyper-scale engineering challenges to serve the largest companies on the planet.

About the Role: We are looking for a highly motivated Senior Software Development Engineering Manager to build and lead our team in the Chennai office. You'll play a crucial role in growing the team and its skillset, and you will collaborate with stakeholders across the globe. This is a great opportunity to lead and contribute to a dynamic and critical platform in a fast-paced environment. You will be responsible for leading a team of engineers, ensuring the seamless operation of our Analytics products. A critical responsibility will be driving team onboarding and product knowledge ramp-up, cooperating closely with our core teams in Dublin and Pleasanton. This role will require strong cross-team collaboration and communication to effectively bridge time zone differences and ensure seamless workflow. You will lead by example, leveraging your deep knowledge of building world-class software. You will promote a diverse and inclusive environment where employees are happy, energized, and engaged, and excited to come to work every day.

Responsibilities:
- Build and lead a multidisciplinary development team; drive them through technical challenges, delivering high-quality solutions that power Analytics at scale.
- Understand and promote industry-standard methodologies.
- Coach and mentor team members to help them be at their best, assisting with career growth and personal development.
- Foster an environment where communication, teamwork, and collaboration are rewarded.
- Participate in our 12x7 on-call rotation supporting our applications in development and customer environments.
- Energize your team and have fun engineering software!

About You

Basic Qualifications:
- 8+ years of experience in a Software Engineering role (preferably using Java, Scala, or another similar language).
- 4+ years of proven experience leading and managing teams delivering software in an agile environment.
- Bachelor's degree in a computer-related field or equivalent work experience.

Other Qualifications:
- Experience in building highly available, scalable, reliable, multi-tenanted big data applications on cloud (AWS, GCP) and/or data center architectures.
- Working knowledge of distributed system principles.
- Experience managing big data frameworks like Spark and/or Hadoop.
- Demonstrated track record of delivering performant, resilient solutions in a business-critical SaaS environment.
- Solid understanding and practical experience with software engineering best practices (coding standards, code reviews, SCM, CI, build processes, testing, and operations).
- A strong focus on delivering high-quality software products and continuous innovation, and a belief in test automation and performance engineering.
- The interpersonal skills needed to positively influence important issues or decisions in a multi-functional environment.
- The ability to communicate technical complexity in simple terms to both technical and non-technical audiences.
- Experience supporting team members' career growth and development.
- You put people first and ensure a psychologically safe environment for team members.

Our Approach to Flexible Work: With Flex Work, we're combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional about making the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter.

Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!

Posted 1 week ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Hyderabad

Work from Office

Key Responsibilities:
- Design, develop, and maintain large-scale data processing workflows using big data technologies.
- Develop ETL/ELT pipelines to ingest, clean, transform, and aggregate data from various sources.
- Work with distributed computing frameworks such as Apache Hadoop, Spark, Flink, or Kafka.
- Optimize data pipelines for performance, reliability, and scalability.
- Collaborate with data scientists, analysts, and engineers to support data-driven projects.
- Implement data quality checks and validation mechanisms (see the sketch below).
- Monitor and troubleshoot data processing jobs and infrastructure.
- Document data workflows, architecture, and processes for team collaboration and future maintenance.
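A rough sketch of the data quality checks and validation mechanisms listed above, wired into a PySpark batch job; the paths, columns, and thresholds are hypothetical:

```python
# Illustrative sketch only: paths, columns, and thresholds are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("/data/curated/customers")

total = df.count()
null_ids = df.filter(F.col("customer_id").isNull()).count()
dupes = total - df.dropDuplicates(["customer_id"]).count()

# Fail the job loudly when thresholds are breached, so the orchestrator
# can alert and halt downstream tasks.
if null_ids > 0:
    raise ValueError(f"{null_ids} rows have a null customer_id")
if total and dupes / total > 0.01:
    raise ValueError(f"duplicate rate above 1%: {dupes} of {total}")

print(f"quality checks passed for {total} rows")
```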

Posted 1 week ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Chennai, Bengaluru

Work from Office

Join us as a Java and PySpark Developer. This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll be engineering and maintaining innovative, customer-centric, high-performance, secure, and robust solutions. It's a chance to hone your existing technical skills and advance your career while building a wide network of stakeholders. We're offering this role at associate level.

What you'll do: In your new role, you'll be working within a feature team to engineer software, scripts, and tools, as well as liaising with other engineers, architects, and business analysts across the platform. You'll also be:
- Producing complex and critical software rapidly and of high quality which adds value to the business.
- Working in permanent teams who are responsible for the full life cycle, from initial development, through enhancement and maintenance, to replacement or decommissioning.
- Collaborating to optimise our software engineering capability.
- Designing, producing, testing, and implementing our working software solutions.
- Working across the life cycle, from requirements analysis and design, through coding, to testing, deployment, and operations.

The skills you'll need: You'll need at least six years of experience in PySpark, SQL, Snowflake, and Big Data (a Snowflake-from-Spark sketch follows below). You'll also need experience in JIRA, Confluence, and REST API calls. Experience working with AWS in the financial domain is desired. You'll also need:
- Experience of working with development and testing tools, bug tracking tools, and wikis.
- Experience in multiple programming languages or low-code toolsets.
- Experience of DevOps and Agile methodology and associated toolsets.
- Experience in developing unit test cases and executing them.
- Experience of implementing programming best practice, especially around scalability, automation, virtualisation, optimisation, availability, and performance.

Hours: 45
Job Posting Closing Date: 07/08/2025
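A minimal sketch of reading a Snowflake table from PySpark, assuming the Snowflake Spark connector is on the classpath; every connection option shown is hypothetical, and real credentials belong in a vault:

```python
# Illustrative sketch only: all connection options are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snowflake_read").getOrCreate()

sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "********",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}

# The connector pushes simple filters and projections down to Snowflake.
trades = (
    spark.read.format("snowflake")
    .options(**sf_options)
    .option("dbtable", "TRADES")
    .load()
)

trades.filter("TRADE_DATE = current_date()").show()
```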

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Bengaluru

Work from Office

Building off our Cloud momentum, Oracle has formed a new organization: Health Data Intelligence. This team will focus on product development and product strategy for Oracle Health, while building out a complete platform supporting modernized, automated healthcare. This is a net-new line of business, constructed with an entrepreneurial spirit that promotes an energetic and creative environment. We are unencumbered and will need your contribution to make it a world-class engineering center with a focus on excellence.

Oracle Health Data Analytics has a rare opportunity to play a critical role in how Oracle Health products impact and disrupt the healthcare industry by transforming how healthcare and technology intersect. As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures. You will define specifications for significant new projects and specify, design, and develop software according to those specifications. You will perform professional software development tasks associated with developing, designing, and debugging software applications or operating systems.

You will design and build distributed, scalable, and fault-tolerant software systems, and build cloud services on top of the modern OCI infrastructure. You will participate in the entire software lifecycle, from design to development, to quality assurance, and to production; invest in the best engineering and operational practices upfront to ensure our software quality bar is high; optimize data processing pipelines for orders of magnitude higher throughput and faster latencies; and leverage a plethora of internal tooling at HDI to develop, build, deploy, and troubleshoot software.

Qualifications:
- 4+ years of experience in the software industry working on design, development, and delivery of highly scalable products and services.
- Understanding of the entire product development lifecycle, including understanding and refining technical specifications, HLD and LLD of world-class products and services, refining the architecture by providing feedback and suggestions, developing and reviewing code, driving DevOps, and managing releases and operations.
- Strong knowledge of Java or JVM-based languages.
- Experience with multi-threading and parallel processing.
- Strong knowledge of big data technologies like Spark, Hadoop MapReduce, Crunch, etc.
- Past experience building scalable, performant, and secure services/modules.
- Understanding of microservices architecture and API design.
- Experience with container platforms.
- Good understanding of testing methodologies.
- Experience with CI/CD technologies.
- Experience with observability tools like Splunk, New Relic, etc.
- Good understanding of versioning tools like Git/SVN.

Posted 1 week ago

Apply

10.0 - 15.0 years

35 - 40 Lacs

Bengaluru

Work from Office

Uber is looking for an experienced Engineering Manager to lead a team within our Container Platform Infrastructure group. As a manager within Infrastructure, you will have a significant impact on the evolution of Uber's backend teams and architecture. Our mission is to make transportation as reliable as running water, and we are looking for a passionate manager to build the dependable foundation that supports that vision.

The Container Platform team's mission is to build the next generation of Uber's container orchestration platform that is secure, reliable, scalable, and highly efficient. At Uber we have a complex infrastructure spanning both data centers and cloud, supporting a diverse variety of workloads (stateless, batch, stateful), each different in its characteristics and requirements.

Engineering Managers at Uber exhibit the following qualities:
- Builds Trust: demonstrates personal excellence with empathy, authenticity, inclusivity, and fairness.
- Grows and Adapts: shows the ability to adapt with resilience and humility.
- Sets Vision: establishes team purpose and plans for execution.
- Operationalizes: ensures operational efficiency and impact.
- Develops and Coaches: invests time in coaching and supports the development of others.
- Connects: fosters collaboration within and across teams.

What you'll do:
- Provide product and technical leadership, set goals that produce value to the business, and uphold high technical standards for your team.
- Boost operational efficiency and development productivity of the team.
- Manage and grow individual team members; attract and recruit new talent responsible for ever-growing technical challenges.
- Collaborate across many areas of Uber; work with other technical, product, and operations leaders across the globe.

What you'll need:
- Experience: 10+ years of significant experience building scalable, fault-tolerant, and robust products and platforms, including 5+ years managing engineering teams of 10+ people. Ability to lead teams in India and collaborate with stakeholder and sister teams in the US.
- Hiring prowess: you're a strong leader who can attract talent in Bangalore, raising the bar for excellence.
- Bias towards action: you believe that speed and quality aren't mutually exclusive. You've shown good judgment about shipping as fast as possible while still making sure that products are built in a sustainable, responsible way, and you're comfortable making mistakes, provided you and your team learn from them.
- Engineering excellence: you have the technical strength and deep knowledge of the whole stack to give phenomenal architecture and implementation mentorship to the teams who will count on your experience.
- Mentoring: you know that the most important part of your job is setting the team up for success, through mentoring, teaching, and reviewing.
- Dedication: cities never sleep, and neither does Uber. You care tremendously about keeping the Uber experience consistent for users and strive to make any issues invisible to riders. You are your harshest critic and hold yourself personally accountable, jumping in and taking ownership of problems that may not even be in your team's scope.
- Proficiency in operating systems, Linux, virtual machines, containers (Docker/containerd), and cluster management (Kubernetes, Mesos) technologies.
- Knowledge of Big Data (e.g., Spark, Ray) technologies.

Uber's mission is to reimagine the way the world moves for the better. Here, bold ideas create real-world impact, challenges drive growth, and speed fuels progress. What moves us, moves the world; let's move it forward, together.

Offices continue to be central to collaboration and Uber's cultural identity. Unless formally approved to work fully remotely, Uber expects employees to spend at least half of their work time in their assigned office. For certain roles, such as those based at green-light hubs, employees are expected to be in-office for 100% of their time. Please speak with your recruiter to better understand in-office expectations for this role.

Posted 1 week ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Skills desired:
- Strong at SQL (multi-pyramid SQL joins)
- Python skills (FastAPI or Flask framework)
- PySpark
- Commitment to work in overlapping hours
- GCP knowledge (BQ, Dataproc, and Dataflow)
- Amex experience is preferred (not mandatory)
- Power BI preferred (not mandatory)

Key skills: Flask, PySpark, Python, SQL
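To give a flavor of how these skills combine, here is a hedged sketch of a FastAPI endpoint that runs a parameterized BigQuery query; the project, dataset, table, and route are all hypothetical:

```python
# Illustrative sketch only: project, dataset, table, and route are hypothetical.
from fastapi import FastAPI
from google.cloud import bigquery

app = FastAPI()
bq = bigquery.Client()  # uses application-default credentials

@app.get("/daily-spend/{merchant_id}")
def daily_spend(merchant_id: str):
    # Parameterized query to avoid SQL injection.
    query = """
        SELECT txn_date, SUM(amount) AS total
        FROM `example-project.analytics.transactions`
        WHERE merchant_id = @merchant
        GROUP BY txn_date
        ORDER BY txn_date
    """
    job = bq.query(
        query,
        job_config=bigquery.QueryJobConfig(
            query_parameters=[
                bigquery.ScalarQueryParameter("merchant", "STRING", merchant_id)
            ]
        ),
    )
    return [dict(row) for row in job.result()]
```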

Posted 1 week ago

Apply

9.0 - 14.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Machine Learning Engineer - Bengaluru, Karnataka

Full job description: Predictive Research is a Machine Learning, Artificial Intelligence, Big Data, Data Science, and Quant Analytics company with a good track record over the last 9 years. Our main focus is on Machine Learning, Artificial Intelligence, Neural Networks, Big Data, Data Science, and financial engineering, and we cater to many clients in Machine Learning, Data Science, and Quantitative Research.

Requirements: Bachelor's degree in a quantitative discipline. Experience: 2 years total work (preferred).

Benefits: Competitive compensation package. Mentorship from experienced engineers. Opportunity to work on challenging projects. Professional growth and skill development. Inclusive work environment. Health and wellness benefits. Flexible work arrangements.

Apply Now

Posted 1 week ago

Apply

3.0 - 5.0 years

3 - 6 Lacs

Chennai

Work from Office

About the Role: We are looking for a highly skilled and motivated Data Engineer to join our client's team. You will play a key role in designing, implementing, and optimizing data architectures and pipelines to support scalable data solutions for our business.

Qualifications:
- 3-5 years of experience in data engineering, with a focus on building and managing data pipelines.
- Strong proficiency in relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Experience in building data pipelines with data warehouses like Snowflake and Redshift.
- Experience in processing unstructured data stored in S3 using Athena, Glue, etc.
- Hands-on experience with Kafka for real-time data streaming and messaging.
- Solid understanding of ETL processes, data integration, and data pipeline optimization.
- Proficiency in programming languages like Python, Java, or Scala for data processing.
- Experience with Apache Spark for big data processing and analytics is an advantage.
- Familiarity with cloud platforms like AWS, GCP, or Azure for data infrastructure is a plus.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.

Key Responsibilities:
- Design, build, and maintain efficient and scalable data pipelines to support data integration and transformation across various sources.
- Work with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra) to manage and optimize large datasets.
- Utilize Apache Spark for distributed data processing and real-time analytics.
- Implement and manage Kafka for data streaming and real-time data integration between systems.
- Collaborate with cross-functional teams to gather and translate business requirements into technical solutions.
- Monitor and optimize the performance of data pipelines and architectures, ensuring high availability and reliability.
- Ensure data quality, consistency, and integrity across all systems.
- Stay up-to-date with the latest trends and best practices in data engineering and big data technologies.

Posted 1 week ago

Apply

5.0 - 10.0 years

4 - 9 Lacs

Pune, Chennai, Bengaluru

Hybrid

We are seeking highly skilled and experienced Senior Architects/Consultants to lead our Generative AI Technologies team. The ideal candidate will have a deep understanding of Generative AI, machine learning, and related technologies, along with a proven track record of architecting and implementing innovative solutions. As a Senior Architect/Consultant, you will play a pivotal role in shaping our Generative AI strategy, selecting appropriate models and technologies, and collaborating with cross-functional teams to deliver cutting-edge solutions that meet customer requirements and business objectives.

Primary Skill Set:
- Generative AI Expertise: In-depth knowledge of various Generative AI techniques, including GANs (Generative Adversarial Networks), VAEs (Variational Autoencoders), and other relevant architectures (a skeletal GAN sketch follows below). Experience with both image and text generation is essential. Conversant with Gen AI development tools such as prompt engineering, LangChain, Semantic Kernel, and function calling. Exposure to solution design based on both API-based and open-source LLMs.
- Responsible AI: Proficient knowledge of Responsible AI and data privacy principles to ensure ethical data handling, transparency, and accountability in all stages of AI development. Must demonstrate a commitment to upholding privacy standards, mitigating bias, and fostering trust within data-driven initiatives.
- Machine Learning Mastery: Profound understanding of machine learning principles, algorithms, and frameworks. Able to design and implement models, optimize performance, and manage training pipelines effectively.
- Technical Proficiency: Proficiency in programming languages commonly used in AI development, such as Python, TensorFlow, PyTorch, or similar tools. Experience with cloud platforms (e.g., AWS, Azure, GCP) and distributed computing is advantageous.
- Architecture Design: Ability to design end-to-end Generative AI architectures that encompass data preprocessing, model selection, training pipelines, and deployment strategies. Strong grasp of scalable, reliable, and efficient system design.

Secondary Skill Set:
- Domain Knowledge: Familiarity with the specific industry domain or vertical in which the Generative AI solutions will be applied (e.g., healthcare, finance, entertainment) is beneficial. This enables contextual understanding and tailored solution development.
- Data Engineering: Understanding of data engineering practices, data pipelines, and data management. Proficiency in data preprocessing, cleansing, and transformation for effective model training.
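For orientation only, a skeletal PyTorch sketch of the GAN architecture named above; the layer sizes are arbitrary placeholders, and a real image or text GAN would use convolutional or transformer blocks:

```python
# Illustrative sketch only: dimensions and data are arbitrary placeholders.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g., flattened 28x28 images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()

# One adversarial step on random stand-in data:
real = torch.rand(32, data_dim) * 2 - 1        # pretend "real" batch
fake = generator(torch.randn(32, latent_dim))  # generated batch

# Discriminator objective: score real as 1, fake as 0.
d_loss = loss(discriminator(real), torch.ones(32, 1)) + \
         loss(discriminator(fake.detach()), torch.zeros(32, 1))

# Generator objective: fool the discriminator into scoring fakes as 1.
g_loss = loss(discriminator(fake), torch.ones(32, 1))

print(f"d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")
```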

Posted 1 week ago

Apply

3.0 - 8.0 years

11 - 12 Lacs

Bengaluru

Work from Office

Req ID: 334744

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Databricks Developer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties:
Pushing data domains into a massive repository.
Building a large data lake, heavily leveraging Databricks.

Minimum Skills Required:
3+ years of experience in a Data Engineer or Software Engineer role.
Undergraduate degree required (graduate degree preferred) in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
Experience with data pipeline and workflow management tools.
Advanced working SQL knowledge and experience with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
Understanding of data warehouse (DWH) systems and of migration from DWH to data lakes.
Understanding of ELT and ETL patterns and when to use each (a short ELT sketch follows this listing).
Understanding of data models and of transforming data into those models, including warehousing and analytic models.
Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
Working knowledge of message queuing, stream processing, and highly scalable big data stores.
Experience supporting and working with cross-functional teams in a dynamic environment.

Preferred Qualifications:
2+ years of experience with Azure cloud services: ADLS, ADF, ADLA, AAS.

About NTT DATA:
We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com.

Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees.

NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact-us form: https://us.nttdata.com/en/contact-us

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us
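As a concrete illustration of the ELT-versus-ETL distinction this listing calls out, the following PySpark sketch lands raw extracts in the lake first and only transforms them afterwards with SQL; in an ETL pattern, the aggregation would instead run before the load. All paths and column names are invented for the example.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("elt-demo").getOrCreate()

# Extract + Load: copy the source extract into the lake untransformed.
raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("/landing/orders.csv"))          # hypothetical landing file
raw.write.mode("overwrite").parquet("/lake/raw/orders")

# Transform (after the load): shape raw data into an analytic model with SQL.
spark.read.parquet("/lake/raw/orders").createOrReplaceTempView("raw_orders")
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM raw_orders
    GROUP BY order_date
""")
daily.write.mode("overwrite").parquet("/lake/marts/daily_orders")
```

Keeping the untransformed copy is the point of ELT: new analytic models can later be rebuilt from `/lake/raw/orders` without re-extracting from the source system.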

Posted 1 week ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Coimbatore

Work from Office

Position Name: Data Engineer
Location: Coimbatore (Hybrid, 3 days per week)
Work Shift Timing: 1.30 pm to 10.30 pm (IST)
Mandatory Skills: Scala, Spark, Python, Databricks
Good to have: Java and Hadoop

The Role:
Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
Constructing infrastructure for efficient ETL processes from various sources and storage systems.
Leading the implementation of algorithms and prototypes to transform raw data into useful information.
Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
Creating innovative data validation methods and data analysis tools.
Ensuring compliance with data governance and security policies.
Interpreting data trends and patterns to establish operational alerts.
Developing analytical tools, programs, and reporting mechanisms.
Conducting complex data analysis and presenting results effectively.
Preparing data for prescriptive and predictive modeling.
Continuously exploring opportunities to enhance data quality and reliability.
Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements:
Experience with big data technologies (Hadoop, Spark, NiFi, Impala).
Hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, distributed data pipelines (a Delta Lake upsert sketch follows this listing).
High proficiency in Scala/Java and Spark for applied large-scale data processing.
Expertise with big data technologies, including Spark, Data Lake, and Hive.
Solid understanding of batch and streaming data processing techniques.
Proficient knowledge of the data lifecycle management process, including data collection, access, use, storage, transfer, and deletion.
Expert-level ability to write complex, optimized SQL queries across extensive data volumes.
Experience with HDFS, NiFi, and Kafka.
Experience with Apache Ozone, Delta tables, Databricks, Axon (Kafka), Spring Batch, and Oracle DB.
Familiarity with Agile methodologies.
Obsession with service observability, instrumentation, monitoring, and alerting.
Knowledge of or experience in architectural best practices for building data lakes.

Interested candidates, share your resume at Neesha1@damcogroup.com along with the details below:
Total experience:
Relevant experience in Scala & Spark:
Current CTC:
Expected CTC:
Notice period:
Current location:
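One pipeline step implied by this listing's stack (Spark, Databricks, Delta tables) is an idempotent upsert into a Delta table. Below is a minimal PySpark sketch using the delta-lake Python bindings; the table paths and the customer_id join key are assumptions for illustration, and the code presumes a Spark session with Delta Lake configured (as on Databricks).

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("upsert").getOrCreate()

# Hypothetical staging data produced by an upstream extract job.
updates = spark.read.parquet("/staging/customers")

# Merge into an existing Delta table: update matched rows, insert new ones.
target = DeltaTable.forPath(spark, "/lake/customers")  # hypothetical table path
(target.alias("t")
 .merge(updates.alias("u"), "t.customer_id = u.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```

Because the merge is keyed, re-running the job with the same staging data leaves the table unchanged, which is what makes the pipeline safe to retry after a failure.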

Posted 1 week ago

Apply

2.0 - 6.0 years

3 - 6 Lacs

Hyderabad, Pune

Work from Office

":" Core Responsibilities: Design and develop data pipelines and workflows within Palantir Foundry Build and manage Ontology objects to enable semantic reasoning over data Configure and maintain data integrations from various sources (JDBC, SFTP, APIs) Write clean, efficient code using Python, Java, Scala, and SQL Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions Perform data quality checks and ensure reliability across Foundry applications Participate in code reviews, agile ceremonies, and contribute to best practices Troubleshoot and resolve issues in Foundry applications and pipelines Stay updated on Foundry features, AIP capabilities, and platform enhancements Requirements 36 years of experience in software development or data engineering 2+ years of hands-on experience with Palantir Foundry Strong understanding of data modeling, ETL processes, and data warehousing Proficiency in Python, Java, or Scala Experience with SQL and relational databases Familiarity with cloud platforms like AWS, Azure, or GCP Excellent problem-solving, communication, and collaboration skills Preferred Qualifications: Experience with Palantir Foundry/AIP and LLMs Knowledge of DevOps practices, ontology design, and access control Exposure to data visualization tools (e.g., Tableau, Power BI) Familiarity with big data technologies like Spark or Hadoop Palantir certifications (e.g., Foundry Developer, Data Engineer) Benefits " , "Work_Experience":"3-6years" , "Job_Type":"Full time" , "Job_Opening_Name":"Palantir Foundry/AIP Developer" , "State":"Maharashtra" , "Country":"India" , "Zip_Code":"411045" , "id":"86180000007419270" , "Publish":true , "Date_Opened":"2025-07-23" , "Keep_on_Career_Site":false}]);

Posted 1 week ago

Apply