0.0 - 3.0 years
0 - 0 Lacs
Pune, Maharashtra
On-site
Job Summary: The Snowflake Developer will be responsible for designing, developing, and maintaining data pipelines, data warehouses, and data models using Snowflake. They will collaborate with data analysts, data scientists, and business clients to ensure that the data architecture meets the needs of the organization.
Key Responsibilities:
Design, develop, and maintain data pipelines, data warehouses, and data models in Snowflake
Create and manage ETL processes to move data from various sources into Snowflake
Ensure data quality, integrity, and consistency across all data sources and data models
Work with clients to identify requirements and design solutions that meet their needs
Optimize Snowflake performance and troubleshoot issues as they arise
Develop and maintain documentation of data architecture, data models, and ETL processes
Stay up to date with Snowflake updates, new features, and best practices
Participate in code reviews, testing, and debugging activities
Collaborate with data analysts and data scientists to design and implement analytics solutions
Qualifications:
Bachelor's degree in Computer Science, Information Systems, or a related field
Minimum of 4-7 years of experience in designing and developing data warehouses and data models using Snowflake
Strong understanding of SQL and experience with database technologies such as Oracle, SQL Server, MySQL, etc.
Knowledge of ETL tools and processes such as Informatica, Talend, etc.
Experience with scripting languages such as Python, Perl, etc.
Familiarity with data modeling tools such as ERwin, ER/Studio, etc.
Strong problem-solving and analytical skills
Excellent written and verbal communication skills
Ability to work independently and in a team-oriented environment
Preferred Qualifications:
Experience with cloud technologies such as AWS, Azure, or Google Cloud Platform
Certification in Snowflake or a related technology
Experience with Big Data technologies such as Hadoop, Spark, etc.
Experience with data visualization tools such as Tableau, Power BI, etc.
Familiarity with Agile development methodologies
Note: This job description is not intended to be all-inclusive. The employee may perform other related duties as negotiated to meet the ongoing needs of the organization.
Job Types: Full-time, Permanent
Pay: ₹13,801.62 - ₹63,954.98 per month
Benefits: Health insurance, Provident Fund
Location Type: In-person
Schedule: Day shift
Ability to commute/relocate: Pune, Maharashtra: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): What is your current annual CTC in INR Lacs? What is your notice period in terms of days?
Experience: Snowflake Developer: 3 years (Required)
Work Location: In person
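The ETL responsibilities above follow the classic extract-transform-load pattern: pull rows from a source, enforce data-quality rules, and stage warehouse inserts. A minimal sketch in plain Python (the data, table name, and quality rules are invented for illustration, not this employer's actual pipeline):

```python
import csv
import io

# Hypothetical source extract; in practice this would come from an upstream system.
RAW = """order_id,amount,region
1001,250.00,west
1002,,east
1003,99.50,west
"""

def extract(raw_csv):
    """Extract: read raw rows from a CSV source."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: enforce a basic quality rule (amount required) and normalize values."""
    clean = []
    for row in rows:
        if not row["amount"]:
            continue  # drop rows that violate the quality rule
        clean.append({
            "order_id": int(row["order_id"]),
            "amount": float(row["amount"]),
            "region": row["region"].upper(),
        })
    return clean

def load_statements(rows, table="ANALYTICS.ORDERS"):
    """Load: render parameterized INSERTs a warehouse cursor would execute."""
    sql = f"INSERT INTO {table} (order_id, amount, region) VALUES (%s, %s, %s)"
    return [(sql, (r["order_id"], r["amount"], r["region"])) for r in rows]

clean = transform(extract(RAW))
stmts = load_statements(clean)
print(len(clean), clean[0]["region"])
```

In a real Snowflake job the `load_statements` output would be executed through a database cursor; the separation of the three stages is what makes each one independently testable.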
Posted 1 week ago
3.0 years
15 - 20 Lacs
Madurai, Tamil Nadu
On-site
Dear Candidate, greetings of the day! I am Kantha, and I'm reaching out to you regarding an exciting opportunity with TechMango. You can connect with me on LinkedIn (https://www.linkedin.com/in/kantha-m-ashwin-186ba3244/) or by email: kanthasanmugam.m@techmango.net
TechMango Technology Services is a full-scale software development services company founded in 2014 with a strong focus on emerging technologies, whose primary objective is to deliver strategic technology solutions that advance its business partners' goals. We are a leading full-scale Software and Mobile App Development Company. TechMango is driven by the mantra "Clients' Vision is our Mission", and we stay true to that statement. Our aim is to be the technologically advanced and most loved organization, providing high-quality, cost-efficient services with a long-term client relationship strategy. We are operational in the USA (Chicago, Atlanta), Dubai (UAE), and India (Bangalore, Chennai, Madurai, Trichy). Website: https://www.techmango.net/
Job Title: GCP Data Engineer
Location: Madurai
Experience: 5+ Years
Notice Period: Immediate
About TechMango: TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.
Role Summary: As a GCP Data Engineer, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.
Key Responsibilities:
Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
Define data strategy, standards, and best practices for cloud data engineering and analytics
Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery
Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
Architect data lakes, warehouses, and real-time data platforms
Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP)
Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
Provide technical leadership in architectural decisions and future-proofing of the data ecosystem
Required Skills & Qualifications:
5+ years of experience in data architecture, data engineering, or enterprise data platforms
Minimum 3 years of hands-on experience with GCP data services
Proficient in: BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner; Python / Java / SQL; data modeling (OLTP, OLAP, star/snowflake schema)
Experience with real-time data processing, streaming architectures, and batch ETL pipelines
Good understanding of IAM, networking, security models, and cost optimization on GCP
Prior experience leading cloud data transformation projects
Excellent communication and stakeholder management skills
Preferred Qualifications:
GCP Professional Data Engineer / Architect Certification
Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics
Exposure to AI/ML use cases and MLOps on GCP
Experience working in agile environments and client-facing roles
What We Offer:
Opportunity to work on large-scale data modernization projects with global clients
A fast-growing company with a strong tech and people culture
Competitive salary, benefits, and flexibility
A collaborative environment that values innovation and leadership
Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year
Application Question(s): Current CTC? Expected CTC? Notice period? (If you are serving your notice period, please mention your last working day.)
Experience: GCP Data Architecture: 3 years (Required); BigQuery: 3 years (Required); Cloud Composer (Airflow): 3 years (Required)
Location: Madurai, Tamil Nadu (Required)
Work Location: In person
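The streaming side of the responsibilities above (Dataflow, Pub/Sub, Apache Beam) boils down to grouping events by event-time windows before aggregating. A tiny pure-Python sketch of fixed (tumbling) windows, mimicking what a Beam `FixedWindows` plus count transform does; the events and window size are invented for illustration:

```python
from collections import defaultdict

# Hypothetical click events: (event_time_seconds, user_id)
EVENTS = [(3, "a"), (12, "b"), (14, "a"), (27, "c"), (29, "a")]

def tumbling_window_counts(events, window_size=10):
    """Assign each event to the fixed event-time window containing it,
    then count events per window."""
    counts = defaultdict(int)
    for ts, _user in events:
        window_start = (ts // window_size) * window_size
        counts[window_start] += 1
    return dict(sorted(counts.items()))

print(tumbling_window_counts(EVENTS))
```

A real Dataflow pipeline would also handle late data with watermarks and triggers; the window-assignment arithmetic, though, is exactly this.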
Posted 1 week ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description
About KPMG in India: KPMG entities in India are professional services firm(s). These Indian member firms are affiliated with KPMG International Limited. KPMG was established in India in August 1993. Our professionals leverage the global network of firms and are conversant with local laws, regulations, markets, and competition. KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara, and Vijayawada. KPMG entities in India offer services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focused, and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment.
Data Architect (Analytics) – AD
Location: NCR (preferably)
Job Summary: The Data Architect will be responsible for designing and managing the data architecture for data analytics projects. This role involves ensuring the integrity, availability, and security of data, as well as optimizing data systems to support business intelligence and analytics needs.
Key Responsibilities:
Design and implement data architecture solutions to support data analytics and business intelligence initiatives
Collaborate with stakeholders to understand data requirements and translate them into technical specifications
Design and implement data systems and infrastructure setups, ensuring scalability, security, and performance
Develop and maintain data models, data flow diagrams, and data dictionaries
Ensure data quality, consistency, and security across all data sources and systems
Optimize data storage and retrieval processes to enhance performance and scalability
Evaluate and recommend data management tools and technologies
Provide guidance and support to data engineers and analysts on best practices for data architecture
Conduct assessments of data systems to identify areas for improvement and optimization
Understanding of Government of India data governance policies and regulatory requirements
Hands-on in troubleshooting complex technical problems in production environments
Equal employment opportunity information: KPMG India has a policy of providing equal opportunity for all applicants and employees regardless of their color, caste, religion, age, sex/gender, national origin, citizenship, sexual orientation, gender identity or expression, disability, or other legally protected status. KPMG India values diversity and we request you to submit the details below to support us in our endeavor for diversity. Providing the below information is voluntary and refusal to submit such information will not be prejudicial to you.
Qualifications:
Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field (Master's degree preferred)
Proven experience as a Data Architect or in a similar role, with a focus on data analytics projects
Strong knowledge of data architecture frameworks and methodologies
Proficiency in database management systems (e.g., SQL, NoSQL), data warehousing, and ETL processes
Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, Azure, Google Cloud)
Certification in data architecture or related fields
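Maintaining data dictionaries, as the responsibilities above call for, typically means the dictionary also drives validation. A small sketch of a dictionary-driven record check in Python; the column names and types are hypothetical, invented purely for illustration:

```python
# Hypothetical data dictionary: column -> expected type, maintained by the
# data architect alongside the data models.
DATA_DICTIONARY = {
    "citizen_id": int,
    "district": str,
    "benefit_amount": float,
}

def validate_record(record, dictionary=DATA_DICTIONARY):
    """Return a list of (column, issue) findings for one record."""
    issues = []
    for col, expected in dictionary.items():
        if col not in record:
            issues.append((col, "missing"))
        elif not isinstance(record[col], expected):
            issues.append((col, f"expected {expected.__name__}"))
    return issues

good = {"citizen_id": 42, "district": "Gurugram", "benefit_amount": 1500.0}
bad = {"citizen_id": "42", "district": "Gurugram"}
print(validate_record(good), validate_record(bad))
```

Keeping the dictionary as data rather than code means new columns need only a dictionary entry, not new validation logic.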
Posted 1 week ago
5.0 - 8.0 years
12 - 18 Lacs
Bengaluru
Work from Office
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 3-5 years of experience in ETL development and data integration.
• Proficiency in SQL and experience with relational databases such as Oracle, SQL Server, or MySQL.
• Familiarity with data warehousing concepts and methodologies.
• Hands-on experience with ETL tools like Informatica, Talend, SSIS, or similar.
• Knowledge of data modeling and data governance best practices.
• Strong analytical skills and attention to detail.
• Excellent communication and teamwork skills.
• Experience with Snowflake or willingness to learn and implement Snowflake-based solutions.
• Experience with Big Data technologies such as Hadoop or Spark.
• Knowledge of cloud platforms like AWS, Azure, or Google Cloud and their ETL services.
• Familiarity with data visualization tools such as Tableau or Power BI.
• Hands-on experience with Snowflake for data warehousing and analytics.
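The data-modeling knowledge asked for above usually means dimensional (star-schema) design: a fact table of measures keyed to dimension tables. A minimal sketch in Python dicts of the join-and-aggregate a reporting query would do in SQL; the tables and keys are invented for illustration:

```python
# Hypothetical star-schema fragment: one fact table, two dimensions,
# linked by surrogate keys.
DIM_PRODUCT = {1: {"name": "Widget", "category": "Hardware"},
               2: {"name": "Gadget", "category": "Hardware"}}
DIM_DATE = {20240101: {"year": 2024, "quarter": "Q1"}}
FACT_SALES = [
    {"product_key": 1, "date_key": 20240101, "amount": 500.0},
    {"product_key": 2, "date_key": 20240101, "amount": 300.0},
]

def sales_by_year_and_category(facts, dim_product, dim_date):
    """Resolve surrogate keys against the dimensions and aggregate,
    the same join a star-schema reporting query performs."""
    totals = {}
    for fact in facts:
        key = (dim_date[fact["date_key"]]["year"],
               dim_product[fact["product_key"]]["category"])
        totals[key] = totals.get(key, 0.0) + fact["amount"]
    return totals

print(sales_by_year_and_category(FACT_SALES, DIM_PRODUCT, DIM_DATE))
```

The point of the surrogate keys is that dimension attributes can change without rewriting facts; the fact table stores only keys and measures.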
Posted 1 week ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Overview of 66degrees: 66degrees is a leading consulting and professional services company specializing in developing AI-focused, data-led solutions leveraging the latest advancements in cloud technology. With our unmatched engineering capabilities and vast industry experience, we help the world's leading brands transform their business challenges into opportunities and shape the future of work. At 66degrees, we believe in embracing the challenge and winning together. These values not only guide us in achieving our goals as a company but also for our people. We are dedicated to creating a significant impact for our employees by fostering a culture that sparks innovation and supports professional and personal growth along the way.
Overview of Role: As a Data Engineer specializing in AI/ML, you'll be instrumental in designing, building, and maintaining the data infrastructure crucial for training, deploying, and serving our advanced AI and Machine Learning models. You'll work closely with Data Scientists, ML Engineers, and Cloud Architects to ensure data is accessible, reliable, and optimized for high-performance AI/ML workloads, primarily within the Google Cloud ecosystem.
Responsibilities:
Data Pipeline Development: Design, build, and maintain robust, scalable, and efficient ETL/ELT data pipelines to ingest, transform, and load data from various sources into data lakes and data warehouses, specifically optimized for AI/ML consumption.
AI/ML Data Infrastructure: Architect and implement the underlying data infrastructure required for machine learning model training, serving, and monitoring within GCP environments.
Google Cloud Ecosystem: Leverage a broad range of Google Cloud Platform (GCP) data services including BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, Vertex AI, Composer (Airflow), and Cloud SQL.
Data Quality & Governance: Implement best practices for data quality, data governance, data lineage, and data security to ensure the reliability and integrity of AI/ML datasets.
Performance Optimization: Optimize data pipelines and storage solutions for performance, cost-efficiency, and scalability, particularly for large-scale AI/ML data processing.
Collaboration with AI/ML Teams: Work closely with Data Scientists and ML Engineers to understand their data needs, prepare datasets for model training, and assist in deploying models into production.
Automation & MLOps Support: Contribute to the automation of data pipelines and support MLOps initiatives, ensuring seamless integration from data ingestion to model deployment and monitoring.
Troubleshooting & Support: Troubleshoot and resolve data-related issues within the AI/ML ecosystem, ensuring data availability and pipeline health.
Documentation: Create and maintain comprehensive documentation for data architectures, pipelines, and data models.
Qualifications:
1-2+ years of experience in Data Engineering, with at least 2-3 years directly focused on building data pipelines for AI/ML workloads.
Deep, hands-on experience with core GCP data services such as BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, and Composer/Airflow.
Strong proficiency in at least one relevant programming language for data engineering (Python is highly preferred), plus strong SQL skills for complex data manipulation, querying, and optimization.
Solid understanding of data warehousing concepts, data modeling (dimensional, 3NF), and schema design for analytical and AI/ML purposes.
Proven experience designing, building, and optimizing large-scale ETL/ELT processes.
Familiarity with big data processing frameworks (e.g., Apache Spark, Hadoop) and concepts.
Exceptional analytical and problem-solving skills, with the ability to design solutions for complex data challenges.
Excellent verbal and written communication skills, capable of explaining complex technical concepts to both technical and non-technical stakeholders. 66degrees is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to actual or perceived race, color, religion, sex, gender, gender identity, national origin, age, weight, height, marital status, sexual orientation, veteran status, disability status or other legally protected class.
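The data-quality gating described in the responsibilities above, run before a dataset is handed to model training, can be as simple as per-column null-rate thresholds. A stdlib-only sketch; the feature rows and threshold are hypothetical, invented for illustration:

```python
# Hypothetical feature rows headed for model training.
ROWS = [
    {"user_id": 1, "watch_hours": 3.5, "label": 1},
    {"user_id": 2, "watch_hours": None, "label": 0},
    {"user_id": 3, "watch_hours": 7.0, "label": 1},
]

def null_rates(rows):
    """Compute the fraction of null values per column."""
    cols = rows[0].keys()
    return {c: sum(r[c] is None for r in rows) / len(rows) for c in cols}

def quality_gate(rows, max_null_rate=0.2):
    """Return the columns that fail the null-rate threshold; an empty
    list means the dataset passes the gate."""
    return [c for c, rate in null_rates(rows).items() if rate > max_null_rate]

print(null_rates(ROWS), quality_gate(ROWS))
```

In production this check would run inside the pipeline (e.g., as a task before the training step) so bad batches never reach the model.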
Posted 1 week ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Teamwork makes the stream work. Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.
About the team: Roku runs one of the largest data lakes in the world. We store over 70 PB of data, run 10+ million queries per month, and scan over 100 PB of data per month. The Big Data team is responsible for building, running, and supporting the platform that makes this possible. We provide all the tools needed to acquire, generate, process, monitor, validate, and access the data in the lake, for both streaming and batch data. We are also responsible for generating the foundational data. The systems we provide include Scribe, Kafka, Hive, Presto, Spark, Flink, Pinot, and others. The team is actively involved in Open Source, and we are planning to increase our engagement over time.
About the Role: Roku is in the process of modernizing its Big Data Platform. We are working on defining the new architecture to improve user experience, minimize cost, and increase efficiency. Are you interested in helping us build this state-of-the-art big data platform? Are you an expert with Big Data technologies? Have you looked under the hood of these systems? Are you interested in Open Source? If you answered "Yes" to these questions, this role is for you!
What you will be doing:
You will be responsible for streamlining and tuning existing Big Data systems and pipelines and building new ones. Making sure the systems run efficiently and with minimal cost is a top priority.
You will be making changes to the underlying systems, and if an opportunity arises, you can contribute your work back into open source.
You will also be responsible for supporting internal customers and on-call services for the systems we host. Making sure we provide a stable environment and a great user experience is another top priority for the team.
We are excited if you have:
7+ years of production experience building big data platforms based upon Spark, Trino, or equivalent
Strong programming expertise in Java, Scala, Kotlin, or another JVM language
A robust grasp of distributed systems concepts, algorithms, and data structures
Strong familiarity with the Apache Hadoop ecosystem: Spark, Kafka, Hive/Iceberg/Delta Lake, Presto/Trino, Pinot, etc.
Experience working with at least 3 of the technologies/tools mentioned here: Big Data / Hadoop, Kafka, Spark, Trino, Flink, Airflow, Druid, Hive, Iceberg, Delta Lake, Pinot, Storm, etc.
Extensive hands-on experience with a public cloud, AWS or GCP
BS/MS degree in CS or equivalent
AI literacy / AI growth mindset
Benefits: Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role.
For details specific to your location, please consult with your recruiter.
The Roku Culture: Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe that a few very talented people can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast, and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet. By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
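Tuning distributed systems like Spark or Trino, as this role describes, often starts with diagnosing partition skew: one oversized partition makes a whole stage wait on a single straggler task. A pure-Python sketch of that first check; the partition counts and threshold are invented for illustration, not Roku's actual tooling:

```python
# Hypothetical per-partition record counts reported by a distributed stage.
PARTITION_COUNTS = [1000, 1100, 950, 1050, 9500, 1000]

def skew_ratio(counts):
    """Ratio of the largest partition to the mean partition size;
    values far above 1.0 indicate a straggler partition."""
    mean = sum(counts) / len(counts)
    return max(counts) / mean

def is_skewed(counts, threshold=3.0):
    """Flag a stage whose biggest partition exceeds the skew threshold."""
    return skew_ratio(counts) > threshold

print(round(skew_ratio(PARTITION_COUNTS), 2), is_skewed(PARTITION_COUNTS))
```

When a stage is flagged, the usual remedies are repartitioning on a higher-cardinality key or salting the hot key before the shuffle.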
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Introduction: A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio.
Your Role And Responsibilities: As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems.
Preferred Education: Master's Degree
Required Technical And Professional Expertise:
Strong proficiency in Java, Spring Framework, Spring Boot, and RESTful APIs; excellent understanding of OOP and design patterns
Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices
Primary skills: Core Java, Spring Boot, Java2/EE, Microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark; Python is good to have
Strong knowledge of microservice logging, monitoring, debugging, and testing; in-depth knowledge of relational databases (e.g., MySQL)
Experience in container platforms such as Docker and Kubernetes; experience in messaging platforms such as Kafka or IBM MQ; good understanding of Test-Driven Development
Familiarity with Ant, Maven, or other build automation for Java, Spring Boot, APIs, microservices, and security
Preferred Technical And Professional Experience:
Experience in concurrent design and multi-threading
Posted 1 week ago
6.0 - 7.0 years
15 - 17 Lacs
India
On-site
About The Opportunity: This role is within the fast-paced enterprise technology and data engineering sector, delivering high-impact solutions in cloud computing, big data, and advanced analytics. We design, build, and optimize robust data platforms powering AI, BI, and digital products for leading Fortune 500 clients across industries such as finance, retail, and healthcare. As a Senior Data Engineer, you will play a key role in shaping scalable, production-grade data solutions with modern cloud and data technologies.
Role & Responsibilities:
Architect and Develop Data Pipelines: Design and implement end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark, and cloud object storage.
Data Warehouse & Data Mart Design: Create scalable data warehouses/marts that empower self-service analytics and machine learning workloads.
Database Modeling & Optimization: Translate logical models into efficient physical schemas, ensuring optimal partitioning and performance management.
ETL/ELT Workflow Automation: Build, automate, and monitor robust data ingestion and transformation processes with best practices in reliability and observability.
Performance Tuning: Optimize Spark jobs and SQL queries through careful tuning of configurations, indexing strategies, and resource management.
Mentorship and Continuous Improvement: Provide production support, mentor team members, and champion best practices in data engineering and DevOps methodology.
Skills & Qualifications
Must-Have:
6-7 years of hands-on experience building production-grade data platforms, including at least 3 years with Apache Spark/Databricks.
Expert proficiency in PySpark, Python, and advanced SQL, with a record of performance-tuning distributed jobs.
Proven expertise in data modeling, data warehouse/mart design, and managing ETL/ELT pipelines using tools like Airflow or dbt.
Hands-on experience with major cloud platforms such as AWS or Azure, and familiarity with modern lakehouse/data-lake patterns.
Strong analytical, problem-solving, and mentoring skills with a DevOps mindset and a commitment to code quality.
Preferred:
Experience with AWS analytics services (Redshift, Glue, S3) or the broader Hadoop ecosystem.
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Exposure to streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
Familiarity with ML feature stores, MLOps workflows, or data governance frameworks.
Relevant certifications (Databricks, AWS, Azure) or active contributions to open-source projects.
Location: India | Employment Type: Full-time
Skills: agile methodologies, team leadership, performance tuning, SQL, ELT, Airflow, AWS, data modeling, Apache Spark, PySpark, data, Hadoop, Databricks, Python, dbt, big data technologies, ETL, Azure
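The ETL/ELT orchestration described above (Airflow/dbt style) usually runs incrementally: each run selects only rows changed since a stored high-water mark, then advances that watermark. A stdlib-only sketch of the core logic; the rows and column names are invented for illustration:

```python
# Hypothetical source rows with an updated_at timestamp (epoch seconds).
SOURCE = [
    {"id": 1, "updated_at": 100},
    {"id": 2, "updated_at": 205},
    {"id": 3, "updated_at": 310},
]

def incremental_batch(rows, last_watermark):
    """Select rows changed since the previous run and compute the new
    watermark -- the core of an incremental ELT load."""
    batch = [r for r in rows if r["updated_at"] > last_watermark]
    new_watermark = max((r["updated_at"] for r in batch), default=last_watermark)
    return batch, new_watermark

batch, wm = incremental_batch(SOURCE, last_watermark=200)
print(len(batch), wm)
```

An orchestrator persists the returned watermark between runs (e.g., in a metadata table), which is what makes the pipeline safely re-runnable: a rerun with an unchanged watermark re-selects exactly the same batch.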
Posted 1 week ago
7.0 years
15 - 17 Lacs
India
Remote
Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.
About The Company: A fast-growing enterprise technology consultancy operating at the intersection of cloud computing, big-data engineering, and advanced analytics. The team builds high-throughput, real-time data platforms that power AI, BI, and digital products for Fortune 500 clients across finance, retail, and healthcare. By combining Databricks Lakehouse architecture with modern DevOps practices, they unlock insight at petabyte scale while meeting stringent security and performance SLAs.
Role & Responsibilities:
Architect end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark, and cloud object storage.
Design scalable data warehouses/marts that enable self-service analytics and ML workloads.
Translate logical data models into physical schemas; own database design, partitioning, and lifecycle management for cost-efficient performance.
Implement, automate, and monitor ETL/ELT workflows, ensuring reliability, observability, and robust error handling.
Tune Spark jobs and SQL queries, optimizing cluster configurations and indexing strategies to achieve sub-second response times.
Provide production support and continuous improvement for existing data assets, championing best practices and mentoring peers.
Skills & Qualifications
Must-Have:
6-7 years building production-grade data platforms, including 3+ years of hands-on Apache Spark/Databricks experience.
Expert proficiency in PySpark, Python, and advanced SQL, with a track record of performance-tuning distributed jobs.
Demonstrated ability to model data warehouses/marts and orchestrate ETL/ELT pipelines with tools such as Airflow or dbt.
Hands-on with at least one major cloud platform (AWS or Azure) and modern lakehouse/data-lake patterns.
Strong problem-solving skills, a DevOps mindset, and a commitment to code quality; comfortable mentoring fellow engineers.
Preferred:
Deep familiarity with the AWS analytics stack (Redshift, Glue, S3) or the broader Hadoop ecosystem.
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience building streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
Exposure to ML feature stores, MLOps workflows, and data-governance/compliance frameworks.
Relevant professional certifications (Databricks, AWS, Azure) or notable open-source contributions.
Benefits & Culture Highlights:
Remote-first and flexible hours, with 25+ PTO days and comprehensive health cover.
Annual training budget and certification sponsorship (Databricks, AWS, Azure) to fuel continuous learning.
An inclusive, impact-focused culture where engineers shape the technical roadmap and mentor a vibrant data community.
Skills: data modeling, big data technologies, team leadership, AWS, data, SQL, agile methodologies, performance tuning, ELT, Airflow, Apache Spark, PySpark, Hadoop, Databricks, Python, dbt, ETL, Azure
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Our vision is to transform how the world uses information to enrich life for all. Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate, and advance faster than ever.
Responsibilities include, but are not limited to:
Strong desire to grow a career as a Data Scientist in highly automated industrial manufacturing, doing analysis and machine learning on terabytes and petabytes of diverse datasets
Experience in the areas of statistical modeling, feature extraction and analysis, and supervised/unsupervised/semi-supervised learning
Exposure to the semiconductor industry is a plus but not a requirement
Ability to extract data from different databases via SQL and other query languages, and to apply data cleansing, outlier identification, and missing-data techniques
Strong software development skills
Strong verbal and written communication skills
Experience with, or desire to learn: machine learning and other advanced analytical methods; fluency in Python and/or R; pySpark and/or SparkR and/or SparklyR; Hadoop (Hive, Spark, HBase); Teradata and/or other SQL databases; TensorFlow and/or other statistical software, including scripting capability for automating analyses; SSIS, ETL; JavaScript, AngularJS 2.0, Tableau
Experience working with time-series data, images, semi-supervised learning, and data with frequently changing distributions is a plus
Experience working with Manufacturing Execution Systems (MES) is a plus
Existing papers from CVPR, NIPS, ICML, KDD, and other key conferences are a plus, but this is not a research position
About Micron Technology, Inc.: We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all.
With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities — from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com
Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron.
AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification.
Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.
Posted 1 week ago
3.0 years
4 Lacs
Delhi
On-site
Job Description: Hadoop & ETL Developer Location: Shastri Park, Delhi Experience: 3+ years Education: B.E./B.Tech/MCA/MSc (IT or CS)/MS Salary: Up to ₹80k (final offer depends on interview and experience) Notice Period: Immediate to 20 days Only candidates from Delhi/NCR will be preferred. Job Summary: We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs. Key Responsibilities: Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies. Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation. Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte. Develop and manage workflow orchestration using Apache Airflow. Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage. Optimize MapReduce and Spark jobs for performance, scalability, and efficiency. Ensure data quality, governance, and consistency across the pipeline. Collaborate with data engineering teams to build scalable and high-performance data solutions. Monitor, debug, and enhance big data workflows to improve reliability and efficiency. Required Skills & Experience: 3+ years of experience in the Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark). Strong expertise in ETL processes, data transformation, and data warehousing. Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte. Proficiency in SQL and handling structured and unstructured data.
Experience with NoSQL databases like MongoDB. Strong programming skills in Python or Scala for scripting and automation. Experience in optimizing Spark and MapReduce jobs for high-performance computing. Good understanding of data lake architectures and big data best practices. Preferred Qualifications Experience in real-time data streaming and processing. Familiarity with Docker/Kubernetes for deployment and orchestration. Strong analytical and problem-solving skills with the ability to debug and optimize data workflows. If you have a passion for big data, ETL, and large-scale data processing, we’d love to hear from you! Job Types: Full-time, Contractual / Temporary Pay: From ₹400,000.00 per year Work Location: In person
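The extract-transform-load flow this posting describes would normally run as a PySpark or Airflow-orchestrated job; purely as an illustration (field names and the business rule are made up), the transform step can be written as a pure function so the logic is visible and testable outside a cluster:

```python
def transform(records):
    """Normalise raw events: drop rows missing the key, cast amount, add a flag.

    Illustrative data-quality rules only; a real pipeline would apply
    equivalent logic as Spark DataFrame operations.
    """
    out = []
    for r in records:
        if not r.get("user_id"):            # quality rule: key must exist
            continue
        amount = float(r.get("amount", 0))  # cast string amounts to float
        out.append({
            "user_id": r["user_id"],
            "amount": amount,
            "high_value": amount > 1000.0,  # hypothetical business rule
        })
    return out
```

Keeping transforms as pure functions like this is a common way to unit-test pipeline logic before wiring it into Spark or Airflow.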
Posted 1 week ago
5.0 - 9.0 years
3 - 9 Lacs
No locations specified
On-site
Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Sr Associate IS Architect What you will do Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to deliver actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and performing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills, experience with big data technologies, and a strong understanding of data architecture and ETL processes. Design, develop, and maintain data solutions for data generation, collection, and processing Be a key team member that assists in design and development of the data pipeline Stand up and enhance BI reporting capabilities through Cognos, PowerBI, or similar tools.
Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions Take ownership of data pipeline projects from inception to deployment, manage scope, timelines, and risks Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency Implement data security and privacy measures to protect sensitive data Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions Collaborate and communicate effectively with product teams Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions Adhere to best practices for coding, testing, and designing reusable code/components Explore new tools and technologies that will help to improve ETL platform performance Participate in sprint planning meetings and provide estimations on technical implementation What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Master's degree / Bachelor's degree with 5-9 years of experience in Computer Science, IT or related field Functional Skills: Must-Have Skills: Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience in using Databricks for building ETL pipelines and handling big data processing Experience with data warehousing platforms such as Amazon Redshift or Snowflake. Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL). Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets.
Experience in BI reporting tools such as Cognos, PowerBI and/or Tableau. Experience with software engineering best practices, including but not limited to version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps Good-to-Have Skills: Experience with cloud platforms such as AWS, particularly data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena) Experience with the Anaplan platform, including building, managing, and optimizing models and workflows, including scalable data integrations Understanding of machine learning pipelines and frameworks for ML/AI models Professional Certifications: AWS Certified Data Engineer (preferred) Databricks Certified (preferred) Soft Skills: Excellent critical-thinking and problem-solving skills Strong communication and collaboration skills Demonstrated awareness of how to function in a team setting Demonstrated presentation skills What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the role Refer to responsibilities You will be responsible for Job Summary: Build solutions for the real-world problems in workforce management for retail. You will work with a team of highly skilled developers and product managers throughout the entire software development life cycle of the products we own. In this role you will be responsible for designing, building, and maintaining our big data pipelines. Your primary focus will be on developing data pipelines using available technologies. In this job, I’m accountable for: Following our Business Code of Conduct and always acting with integrity and due diligence, and have these specific risk responsibilities: -Represent Talent Acquisition in all forums/seminars pertaining to process, compliance and audit -Perform other miscellaneous duties as required by management -Driving CI culture, implementing CI projects and innovation within the team -Design and implement scalable and reliable data processing pipelines using Spark/Scala/Python & Hadoop ecosystem. -Develop and maintain ETL processes to load data into our big data platform. -Optimize Spark jobs and queries to improve performance and reduce processing time. -Working with product teams to communicate and translate needs into technical requirements. -Design and develop monitoring tools and processes to ensure data quality and availability. -Collaborate with other teams to integrate data processing pipelines into larger systems. -Delivering high quality code and solutions, bringing solutions into production. -Performing code reviews to optimise technical performance of data pipelines. -Continually look for how we can evolve and improve our technology, processes, and practices. -Leading group discussions on system design and architecture. -Manage and coach individuals, providing regular feedback and career development support aligned with business goals. -Allocate and oversee team workload effectively, ensuring timely and high-quality outputs.
-Define and streamline team workflows, ensuring consistent adherence to SLAs and data governance practices. -Monitor and report key performance indicators (KPIs) to drive continuous improvement in delivery efficiency and system uptime. -Oversee resource allocation and prioritization, aligning team capacity with project and business demands. Key people and teams I work with in and outside of Tesco: TBS & Tesco Senior Management, TBS Reporting Team, Tesco UK / ROI / Central Europe, Business stakeholders. People, budgets and other resources I am accountable for in my job: Any other accountabilities by the business. Operational skills relevant for this job: ETL, YARN, Spark, Hive, Hadoop, PySpark/Python (any one), Linux/Unix/Shell environments (any one), query optimisation. Good to have: Kafka, REST API/reporting tools. Experience relevant for this job: • 7+ years of experience in building and maintaining big data platforms using Spark/Scala. • Strong knowledge of distributed computing principles and big data technologies such as Hadoop, Spark, Streaming etc. • Experience with ETL processes and data modelling. • Problem-solving and troubleshooting skills. • Working knowledge of Oozie/Airflow. • Experience in writing unit test cases, shell scripting. • Ability to work independently and as part of a team in a fast-paced environment. You will need Refer to responsibilities What's in it for you? At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on the current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of three pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable. Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
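One common Spark optimisation alluded to above is the broadcast (map-side) join: the small dimension table is shipped to every executor as a hash map, so the large table never has to be shuffled by join key. Stripped of Spark itself (all data here is made up for illustration), the idea looks like this:

```python
def broadcast_join(large_rows, small_table):
    """Join a large row stream against a small dimension table.

    Mimics what Spark's broadcast() hint achieves: build a hash map
    from the small side once, then stream the large side through it
    with O(1) lookups and no shuffle.
    """
    lookup = {row["id"]: row for row in small_table}  # the "broadcast" side
    joined = []
    for r in large_rows:
        dim = lookup.get(r["dim_id"])
        if dim is not None:                # inner-join semantics
            joined.append({**r, "dim_name": dim["name"]})
    return joined
```

In actual PySpark this would be `large_df.join(broadcast(small_df), ...)`; the sketch only shows why it avoids the expensive shuffle of the large side.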
Performance Bonus - Opportunity to earn additional compensation bonus based on performance, paid annually Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company’s policy. Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF. Health is Wealth - Tesco promotes programmes that support a culture of health and wellness including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws. Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents. Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request. Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan. Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle. About Us Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers.
Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues. Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single entity traditional shared services in Bengaluru, India (from 2004) to a global, purpose-driven solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business. TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.
Posted 1 week ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Senior Software Engineer Experience: 10+ Years Top Skills: Java, Spring, Scala, AWS, Spark, SQL Work Mode: Hybrid - 3 days from the office Work Location: Marathahalli, Bangalore. Employer: Global Product Company - Established 1969 Why Join Us? Be part of a global product company with over 50 years of innovation. Work in a collaborative and growth-oriented environment. Help shape the future of digital products in a rapidly evolving industry. Required Job Skills and Abilities: 10+ years’ experience in designing and developing enterprise-level software solutions 3 years’ experience developing Scala / Java applications and microservices using Spring Boot 7 years’ experience with large-volume data processing and big data tools such as Apache Spark, SQL, Scala, and Hadoop technologies 5 years’ experience with SQL and relational databases 2 years’ experience working with the Agile/Scrum methodology
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
The healthcare industry presents a significant opportunity for software development, and Health Catalyst stands out as a leading company in this domain. By joining our team, you have the chance to contribute to solving critical healthcare challenges at a national level, impacting the lives of millions. At Health Catalyst, we value individuals who are intelligent, hardworking, and humble, and we are committed to developing innovative tools to enhance healthcare performance, cost-efficiency, and quality. As a Data Engineer at Health Catalyst, your primary focus will be on acquiring data from various sources within a Health Systems ecosystem. Leveraging Catalyst's Data Operating System, you will work closely with both technical and business aspects of the source systems, utilizing multiple technologies to extract the necessary data. Key Responsibilities include: - Proficiency in Structured Query Language (SQL) and experience with EMR/EHR systems - Leading the design, development, and maintenance of scalable data pipelines and ETL processes - Strong expertise in ETL tools and database principles - Excellent analytical and troubleshooting skills, with a strong customer service orientation - Mentoring and guiding a team of data engineers to foster continuous learning and improvement - Monitoring and resolving data infrastructure issues to ensure high availability and performance - Ensuring data quality, integrity, and security across all data platforms - Implementing best practices for data governance, lineage, and compliance Desired Skills: - Experience with RDBMS (SQL Server, Oracle, etc.) 
and Stored Procedure/T-SQL/SSIS - Familiarity with processing HL7 messages, CCD documents, and EDI X12 Claims files - Knowledge of Agile development methodologies and the ability to work with technologies related to data acquisition - Proficiency in Hadoop and other big data technologies - Experience with Microsoft Azure cloud solutions, architecture, and related technologies Education & Experience: - Bachelor's degree in technology, business, or a healthcare-related field - Minimum of 5 years of experience in data engineering, with at least 2 years in a leadership role - 2+ years of experience in the healthcare/technology industry If you are passionate about leveraging your expertise in data engineering to make a meaningful impact in the healthcare sector, we encourage you to apply and be a part of our dynamic and innovative team at Health Catalyst.
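The HL7 messages mentioned above are (in the common v2.x flavour) pipe-delimited segments such as MSH or PID. As a minimal, hedged illustration of the structure only (a real pipeline should use a proper HL7 library, not hand-rolled parsing):

```python
def parse_hl7_segment(segment):
    """Split one HL7 v2 segment into (segment_name, fields).

    HL7 v2 uses '|' as the field separator; the first field is the
    segment name (e.g. MSH, PID). Component ('^') and repetition
    separators are deliberately left unparsed in this sketch.
    """
    fields = segment.strip().split("|")
    return fields[0], fields[1:]
```

Example: for a PID segment, the patient identifier list lands in one of the positional fields, which downstream ETL would then map into the warehouse model.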
Posted 1 week ago
9.0 - 13.0 years
0 Lacs
haryana
On-site
The role of a Data Scientist, Risk Data Analytics at Fidelity International involves taking a leading role in developing Data Science and Advanced Analytics solutions for the business. This includes engaging with key stakeholders in the Global Risk Team to understand various subject areas such as Investment Risk, Non-Financial Risk, Enterprise Risk, Model Risk, and Enterprise Resilience. The Data Scientist will implement advanced analytics solutions on On-Premises/Cloud platforms, develop proof-of-concepts, and collaborate with internal and external teams to progress these concepts to production. Additionally, they will work on maximizing the adoption of Cloud Based advanced analytics solutions by building sandbox analytics environments and supporting delivered models and infrastructure on AWS. The Data Scientist will be responsible for developing and delivering Data Science solutions for the business, partnering with internal and external ecosystem to design and deliver advanced analytics-enabled solutions. They will create advanced analytics solutions on quantitative and text data using Artificial Intelligence, Machine Learning, and NLP techniques, as well as compelling visualizations for customer benefit. Stakeholder management is a key aspect of the role, involving working with Risk SMEs/Managers, stakeholders, and sponsors to understand business problems and translate them into appropriate analytics solutions. The Data Scientist will engage with key stakeholders for the smooth execution, delivery, implementation, and maintenance of solutions. Moreover, the Data Scientist will focus on the adoption of Cloud-enabled Data Science solutions by maximizing adoption of Cloud Based advanced analytics solutions, building sandbox analytics environments, and deploying solutions in production while adhering to best practices. 
Collaboration and Ownership are essential, which includes sharing knowledge and best practices with the team, providing mentoring, coaching, and consulting advice to staff, and taking complete independent ownership of projects and initiatives in the team with minimal support. The ideal candidate for this role should have a strong educational background, with qualifications such as an engineering degree from an IIT, a Master's in a field related to Data Science/Economics/Mathematics, or an MBA from tier-1 institutions. They should have a minimum of 9 years of experience in Data Science and Analytics, with hands-on experience in Statistical Modelling, Machine Learning Techniques, Natural Language Processing, Deep Learning, and Python. The candidate should possess excellent problem-solving skills, the ability to run analytics applications, interpret statistical results, and implement models with clear measurable outcomes. Additionally, experience with Spark/Hadoop/big data platforms, unstructured data, big data, and primary market research is beneficial. Fidelity International offers a comprehensive benefits package, values employee wellbeing, supports development, and promotes flexible working arrangements. The organization is committed to making employees feel motivated by their work and happy to be part of the team. If you are looking to build your future in a dynamic and innovative environment, consider joining Fidelity International's Data Value team. Visit careers.fidelityinternational.com for more information on opportunities to be a part of the team.
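The NLP feature-extraction work described above typically starts by turning text into vectors. As a generic sketch (not Fidelity's stack; real work would use scikit-learn, spaCy, or embedding models), a bag-of-words vectoriser with cosine similarity looks like this:

```python
from collections import Counter
import math

def bag_of_words(docs):
    """Turn documents into term-count vectors over a shared vocabulary."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    vectors = []
    for d in docs:
        counts = Counter(d.lower().split())
        vectors.append([counts.get(w, 0) for w in vocab])
    return vocab, vectors

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

With vectors in hand, documents about similar risk topics score higher against each other than against unrelated text, which is the starting point for clustering or classification.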
Posted 1 week ago
18.0 - 22.0 years
0 Lacs
chennai, tamil nadu
On-site
As the General Manager/Director of Engineering for India Operations at our organization, your primary responsibility will be overseeing the product engineering, quality assurance, and product support functions. You will play a crucial role in ensuring the delivery of high-quality software products that meet SLAs, are delivered on time, and are within budget. Collaborating with the CTO and other team members, you will help develop a long-term product plan for client products and manage the release planning cycles for all products. A key aspect of your role will involve resource management and ensuring that each product team has the necessary skilled resources to meet deliverables. You will also be responsible for developing and managing a skills escalation and promotion path for the product engineering organization, as well as implementing tools and processes to optimize product engineering throughput and quality. Key Result Areas (KRAs) for this role include working effectively across multiple levels in the organization and in a global setting, ensuring key milestones are met, delivering high-quality solutions, meeting project timelines and SLAs, maintaining customer satisfaction, ensuring controlled releases to production, and aligning personnel with tasks effectively. Additionally, you will need to have a deep understanding of our products, their interrelationships, and relevance to the business to ensure their availability and stability. To qualify for this role, you should have a Bachelor's degree in Computer Science/Engineering from premier institutes, with an MBA being preferred. You should have at least 18 years of software development experience, including 10+ years in a managerial capacity. 
Strong knowledge of the software development process, hands-on implementation experience, leadership experience in an early-stage start-up, familiarity with mobile technologies, and professional experience with interactive languages and technologies such as Flex, PHP, HTML5, MySQL, and MongoDB are desired. Experience with Agile methodology and on-site experience working in the US would be advantageous. In summary, as the General Manager/Director of Engineering for India Operations, you will be instrumental in driving the success of our product engineering efforts, ensuring high-quality deliverables, and optimizing processes to meet business objectives effectively. If you are interested in this exciting opportunity, please reach out to us at jobs@augustainfotech.com.
Posted 1 week ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Overview We are looking for an experienced Solution Architect (AI/ML & Data Engineering) to lead the design and delivery of advanced data and AI/ML solutions for our clients. The ideal candidate will have a strong background in end-to-end data architecture, AI lifecycle management, cloud technologies, and emerging Generative AI. Responsibilities: Collaborate with clients to understand business requirements and design robust data solutions. Lead the development of end-to-end data pipelines including ingestion, storage, processing, and visualization. Architect scalable, secure, and compliant data systems following industry best practices. Guide data engineers, analysts, and cross-functional teams to ensure timely delivery of solutions. Participate in pre-sales efforts: solution design, proposal creation, and client presentations. Act as a technical liaison between clients and internal teams throughout the project lifecycle. Stay current with emerging technologies in AI/ML, data platforms, and cloud services. Foster long-term client relationships and identify opportunities for business expansion. Understand and architect across the full AI lifecycle, from ingestion to inference and operations. Provide hands-on guidance for containerization and deployment using Kubernetes. Ensure proper implementation of data governance, modeling, and warehousing. Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, or a related field. 10+ years of experience as a Data Solution Architect or similar role. Deep technical expertise in data architecture, engineering, and AI/ML systems. Strong experience with Hadoop-based platforms, ideally Cloudera Data Platform or Data Fabric. Proven pre-sales experience: technical presentations, solutioning, and RFP support. Proficiency in cloud platforms (Azure preferred; also AWS or GCP) and cloud-native data tools. Exposure to Generative AI frameworks and LLMs like OpenAI and Hugging Face.
Experience in deploying and managing applications on Kubernetes (AKS, EKS, GKE). Familiarity with data governance, data modeling, and large-scale data warehousing. Excellent problem-solving, communication, and client-facing skills. Tools & Technology: Architecture & Engineering: Hadoop Ecosystem: Cloudera Data Platform, Data Fabric, HDFS, Hive, Spark, HBase, Oozie. ETL & Integration: Apache NiFi, Talend, Informatica, Azure Data Factory, AWS Glue. Warehousing: Azure Synapse, Redshift, BigQuery, Snowflake, Teradata, Vertica. Streaming: Apache Kafka, Azure Event Hubs, AWS. Cloud Platforms: Azure (preferred), AWS, GCP. Data Lakes: ADLS, AWS S3, Google Cloud. Platforms: Data Fabric, AI Essentials, Unified Analytics, MLDM, MLDE. AI/ML & GenAI Lifecycle Tools: MLflow, Kubeflow, Azure ML, SageMaker, Ray. Inference: TensorFlow Serving, KServe, Seldon. Generative AI: Hugging Face, LangChain, OpenAI API (GPT-4, etc.). DevOps & Deployment: Kubernetes: AKS, EKS, GKE, open-source K8s, Helm. CI/CD: Jenkins, GitHub Actions, GitLab CI, Azure DevOps. (ref:hirist.tech)
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
pune, maharashtra
On-site
You will be joining Atgeir Solutions, a leading innovator in technology, renowned for its commitment to excellence. As a Technical Lead specializing in Big Data and Cloud technologies, you will have the opportunity for advancement to the role of Technical Architect. Your responsibilities will include leveraging your expertise in Big Data and Cloud technologies to contribute to the design, development, and implementation of complex systems. You will lead and inspire a team of professionals, offering technical guidance and mentorship to foster a collaborative and innovative work environment. In addition, you will be tasked with solving intricate technical challenges and guiding your team in overcoming obstacles in Big Data and Cloud environments. Investing in the growth and development of your team members will be crucial, including identifying training needs, organizing knowledge-sharing sessions, and promoting a culture of continuous learning. Collaboration with stakeholders, such as clients, architects, and other leads, will be essential to understand requirements and align technology strategies with business goals, particularly in the realm of Big Data and Cloud. To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with 7-10 years of experience in software development. A proven track record of technical leadership in Big Data and Cloud environments is required. Proficiency in technologies like Hadoop, Spark, GCP, AWS, and Azure is essential, with knowledge of Databricks/Snowflake considered an advantage. Strong communication and interpersonal skills are necessary to convey technical concepts to various stakeholders effectively. Upon successful tenure as a Technical Lead, you will have the opportunity to progress into the role of Technical Architect. 
This advancement will entail additional responsibilities related to system architecture, design, and strategic technical decision-making, with a continued focus on Big Data and Cloud technologies.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As an ideal candidate for this role, you will be responsible for designing and architecting scalable Big Data solutions within the Hadoop ecosystem. Your key duties will include leading architecture-level discussions for data platforms and analytics systems, constructing and optimizing data pipelines utilizing PySpark and other distributed computing tools, and transforming business requirements into scalable data models and integration workflows. It will be crucial for you to ensure the high performance and availability of enterprise-grade data processing systems. Additionally, you will play a vital role in mentoring development teams and offering guidance on best practices and performance tuning. Your must-have skills for this position include architect-level experience with the Big Data ecosystem and enterprise data solutions, proficiency in Hadoop, PySpark, and distributed data processing frameworks, as well as hands-on experience in SQL and data warehousing concepts. A deep understanding of data lake architecture, data ingestion, ETL, and orchestration tools, along with experience in performance optimization and large-scale data handling, will be essential. Your problem-solving, design, and analytical skills should be excellent. While not mandatory, it would be beneficial if you have exposure to cloud platforms such as AWS, Azure, or GCP for data solutions, and possess knowledge of data governance, data security, and metadata management. Joining our team will provide you with the opportunity to work on cutting-edge Big Data technologies, gain leadership exposure, and be directly involved in architectural decisions. This role offers stability as a full-time position within a top-tier tech team, ensuring a work-life balance with a 5-day working schedule. (ref:hirist.tech)
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
pune, maharashtra
On-site
You will be working as a Lead Data Engineer & Architect at Phonologies, a company that specializes in managing telephony infrastructure for contact center applications and chatbots. Phonologies' platform is used by leading pharmacy chains, Fortune 500 companies, and North America's largest carrier to automate voice-based customer support queries, improving customer interactions and operational efficiency. The company is headquartered in India and operates globally.

As the Lead Data Engineer & Architect, you will design and implement data architecture, create and maintain data pipelines, perform data analysis, and collaborate with different teams to improve data-driven decision-making. You will also lead data engineering projects, ensure data quality and security, and apply your expertise to drive successful outcomes.

To succeed in this role, you should have at least 10 years of experience in enterprise data engineering and architecture. You must be proficient in ETL processes, orchestration, and streaming pipelines, with strong skills in technologies such as Hadoop, Spark, Azure, Kafka, and Kubernetes. You should also have a track record of building MLOps- and AutoML-ready production pipelines and delivering solutions for the telecom, banking, and public sector industries. The ability to lead cross-functional teams with a focus on client satisfaction is crucial, as are certifications in data platforms and AI leadership.

If you are passionate about data engineering, architecture, and driving innovation in a dynamic environment, this role at Phonologies may be the perfect opportunity for you. Join our team in Pune and contribute to our mission of transforming customer support through cutting-edge technology.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
As a Senior Specialist in Software Development (Artificial Intelligence) at Accelya, you will lead the design, development, and implementation of AI and machine learning solutions to tackle complex business challenges. Your expertise in AI algorithms, model development, and software engineering best practices will be crucial in working with cross-functional teams to deliver intelligent systems that optimize business operations and decision-making.

Your responsibilities will include designing and developing AI-driven applications and platforms using machine learning, deep learning, and NLP techniques. You will lead the implementation of advanced algorithms for supervised and unsupervised learning, reinforcement learning, and computer vision. Additionally, you will develop scalable AI models, integrate them into software applications, and build APIs and microservices for deployment in cloud environments or on-premise systems.

Collaboration with data scientists and data engineers will be essential in gathering, preprocessing, and analyzing large datasets. You will also implement feature engineering techniques to enhance the accuracy and performance of machine learning models. Regular evaluation of AI models using performance metrics, and fine-tuning them for optimal accuracy, will be part of your role. Furthermore, you will collaborate with business stakeholders to identify AI adoption opportunities, provide technical leadership and mentorship to junior team members, and stay updated with the latest AI trends and research to introduce innovative techniques to the team. Ensuring ethical compliance, security, and continuous improvement of AI systems will also be key aspects of your role.

You should hold a Bachelor's degree in Computer Science, Data Science, Artificial Intelligence, or a related field, along with at least 5 years of experience in software development focusing on AI and machine learning. Proficiency in AI frameworks and libraries, programming languages such as Python, R, or Java, and cloud platforms for deploying AI models is required, as is familiarity with Agile methodologies, data structures, and databases. Preferred qualifications include a Master's or PhD in Artificial Intelligence or Machine Learning, experience with NLP techniques and computer vision technologies, and certifications in AI/ML or cloud platforms.

Accelya is looking for individuals who are passionate about shaping the future of the air transport industry through innovative AI solutions. If you are ready to contribute your expertise and drive continuous improvement in AI systems, this role offers the opportunity to make a significant impact.
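The model-evaluation duty mentioned above usually means computing standard classification metrics on held-out predictions. A minimal sketch, with hand-made labels standing in for real model output:

```python
def evaluate(y_true, y_pred):
    # Basic classification metrics of the kind used to gate model releases
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall}

metrics = evaluate([1, 0, 1, 1], [1, 0, 0, 1])
# accuracy 0.75, precision 1.0, recall ~0.667
```

In practice a team would use a library such as scikit-learn for this; the point is that "fine-tuning for optimal accuracy" is judged against numbers like these on a held-out set, not on the training data.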
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
haryana
On-site
Join GlobalLogic as a valuable member of the team working on a significant software project for a world-class company that provides M2M / IoT 4G/5G modules to industries such as automotive, healthcare, and logistics. Your engagement will involve contributing to the development of end-user modules' firmware, implementing new features, maintaining compatibility with the latest telecommunication and industry standards, and analyzing and estimating customer requirements.

Requirements
- BA / BS degree in Computer Science, Mathematics, or a related technical field, or equivalent practical experience.
- Proficiency in Cloud SQL and Cloud Bigtable.
- Experience with Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub / Sub, and Genomics.
- Familiarity with Google Transfer Appliance, Cloud Storage Transfer Service, and BigQuery Data Transfer.
- Knowledge of data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and data processing algorithms (MapReduce, Flume).
- Previous experience working with technical customers.
- Proficiency in writing software in languages like Java or Python.
- 6-10 years of relevant consulting, industry, or technology experience.
- Strong problem-solving and troubleshooting skills.
- Excellent communication skills.

Job Responsibilities
- Hands-on experience working with data warehouses, including technical architectures, infrastructure components, ETL / ELT, and reporting / analytic tools.
- Experience in technical consulting.
- Proficiency in architecting and developing software or internet-scale Big Data solutions in virtualized environments like Google Cloud Platform (mandatory) and AWS / Azure (good to have).
- Familiarity with big data, information retrieval, data mining, machine learning, and building high-availability applications with modern web technologies.
- Working knowledge of ITIL and / or agile methodologies.
- Google Data Engineer certification.
What We Offer
- Culture of caring: a people-first, inclusive environment of acceptance and belonging.
- Learning and development: a commitment to continuous learning and growth, with programs, training curricula, and hands-on opportunities for personal and professional advancement.
- Interesting & meaningful work: impactful projects that allow for creative problem-solving and exploration of new solutions.
- Balance and flexibility: diverse career areas, roles, and work arrangements that support work-life balance and personal well-being.
- High-trust organization: a focus on integrity, trustworthiness, and ethical practices.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner known for collaborating with forward-thinking companies to create innovative digital products and experiences. Join the team in transforming businesses and industries through intelligent products, platforms, and services, contributing to cutting-edge solutions that shape the world today.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
The ideal candidate for the Big Data Engineer role should have 3-6 years of experience and be based in Hyderabad, with strong skills in Spark, Python/Scala, AWS/Azure, Snowflake, Databricks, and SQL Server/NoSQL.

As a Big Data Engineer, your main responsibilities will include designing and implementing data pipelines for both batch and real-time processing. You will optimize data storage solutions for efficiency and scalability, collaborate with analysts and business teams to meet data requirements, monitor data pipeline performance, and troubleshoot any issues that arise. Ensuring compliance with data security and privacy policies is crucial.

Required skills include proficiency in Python, SQL, and ETL frameworks; experience with big data tools such as Spark and Hadoop; strong knowledge of cloud services and databases; and familiarity with data modeling and warehousing concepts.
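The batch side of such a pipeline reduces to an extract-transform-load loop. A minimal sketch, with the source and sink swapped for in-memory stand-ins (production code would read from cloud storage and write to Snowflake or Databricks tables, so every name below is illustrative):

```python
def extract(source):
    # Pull raw rows from the source system
    return list(source)

def transform(rows):
    # Clean and reshape: drop malformed rows, normalize field types
    out = []
    for row in rows:
        if row.get("amount") is None:
            continue  # enforce a basic data-quality rule
        out.append({"id": row["id"], "amount": float(row["amount"])})
    return out

def load(rows, sink):
    # Append validated rows to the target store
    sink.extend(rows)
    return len(rows)

source = [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": None}]
sink = []
loaded = load(transform(extract(source)), sink)
# loaded == 1; the row with a missing amount was filtered out
```

The real-time variant applies the same transform to each record as it arrives from a stream rather than to a full batch, which is why the transform step is kept as a pure function here.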
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Data Pipeline Architect at our company, you will be responsible for designing, developing, and maintaining optimal data pipeline architecture. You will monitor incidents, perform root cause analysis, and implement appropriate actions to keep operations running smoothly. Additionally, you will troubleshoot issues related to abnormal job execution and data corruption, and automate jobs, notifications, and reports for efficiency.

Your role will also involve optimizing existing queries, reverse engineering for data research and analysis, and assessing the impact of issues on downstream processes for effective communication. You will respond to failures, address data quality issues, and ensure the overall health of the environment. Maintaining ingestion and pipeline runbooks, portfolio summaries, and DBAR will be part of your responsibilities. Furthermore, you will drive the roadmap for infrastructure changes, enhancements, and updates, and build the infrastructure for optimal extraction, transformation, and loading of data from various sources using big data technologies, Python, or web-based APIs. Conducting and participating in code reviews with peers, communicating effectively, and understanding requirements will be essential in this role.

To qualify for this position, you should hold a Bachelor's degree in Engineering/Computer Science or a related quantitative field. You must have a minimum of 8 years of programming experience with Python and SQL, as well as hands-on experience with GCP, BigQuery, Dataflow, data warehousing, Apache Beam, and Cloud Storage. Experience with massively parallel processing systems like Spark or Hadoop, source code control systems (Git), and CI/CD processes is required. Key aspects of this role include designing, prototyping, and delivering software solutions within the big data ecosystem, developing generative AI models, and ensuring code quality through reviews. Experience with Agile development methodologies, improving data governance and quality, and increasing data reliability are also important.

Joining our team at EXL Analytics offers the opportunity to work in a dynamic and innovative environment alongside experienced professionals. You will gain insight into various business domains, develop teamwork and time-management skills, and receive training in analytics tools and techniques. Our mentoring program and growth opportunities provide the support and guidance needed to excel in your career. The sky is the limit for our team members, and the experience gained at EXL Analytics paves the way for personal and professional development within our company and beyond.
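Much of the incident monitoring and automation described above amounts to wrapping each job in a retry loop with failure notifications. A minimal sketch under that assumption; the `notify` hook and the flaky job are stand-ins for whatever alerting and workloads a real team uses:

```python
import logging

def run_with_retries(job, max_attempts=3, notify=print):
    # Execute a job callable, retrying on failure and alerting once retries are exhausted
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception as exc:
            logging.warning("attempt %d failed: %s", attempt, exc)
            if attempt == max_attempts:
                notify(f"job failed after {max_attempts} attempts: {exc}")
                raise

calls = {"n": 0}

def flaky_job():
    # Simulate a job that fails twice with a transient error, then succeeds
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result = run_with_retries(flaky_job)
# result == "ok" after two failed attempts
```

Orchestrators such as Airflow or Cloud Composer provide this behavior declaratively (retries, alert callbacks), which is usually preferable to hand-rolled wrappers; the sketch only shows the control flow being automated.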
Posted 1 week ago