3.0 - 8.0 years
4 - 8 Lacs
Chennai
Work from Office
Your Profile: As a senior software engineer with Capgemini, you will have 3+ years of experience in Scala with a strong project track record. Hands-on experience as a Scala/Spark developer. Hands-on SQL writing skills on RDBMS (DB2) databases. Experience working with different file formats such as JSON, Parquet, Avro, ORC, and XML (a format-reading sketch follows below). Must have worked on an HDFS platform development project. Proficiency in data analysis, data profiling, and data lineage. Strong oral and written communication skills. Experience working on Agile projects. Your Role: Work on Hadoop, Spark, Hive, and SQL queries. Perform code optimization for performance, scalability, and configurability. Develop data applications at scale in the Hadoop ecosystem. What you'll love about working here: Choosing Capgemini means having the opportunity to make a difference, whether for the world's leading businesses or for society. It means getting the support you need to shape your career in the way that works for you. It means that when the future doesn't look as bright as you'd like, you have the opportunity to make change, to rewrite it. When you join Capgemini, you don't just start a new job. You become part of something bigger: a diverse collective of free-thinkers, entrepreneurs, and experts, all working together to unleash human energy through technology, for an inclusive and sustainable future. At Capgemini, people are at the heart of everything we do! You can exponentially grow your career by being part of innovative projects and taking advantage of our extensive Learning & Development programs. With us, you will experience an inclusive, safe, healthy, and flexible work environment that brings out the best in you! You also get a chance to make positive social change and build a better world by taking an active role in our Corporate Social Responsibility and Sustainability initiatives. And whilst you make a difference, you will also have a lot of fun.
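For context on the file formats this posting names, here is a minimal, illustrative PySpark sketch; the paths, column names, and package coordinates are assumptions, not part of the posting, and Avro/XML support comes from extra packages:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-io-demo").getOrCreate()

json_df = spark.read.json("hdfs:///data/raw/events.json")           # JSON
parquet_df = spark.read.parquet("hdfs:///data/raw/events.parquet")  # Parquet
orc_df = spark.read.orc("hdfs:///data/raw/events.orc")              # ORC

# Avro ships as a separate package:
#   spark-submit --packages org.apache.spark:spark-avro_2.12:<spark-version> ...
avro_df = spark.read.format("avro").load("hdfs:///data/raw/events.avro")

# XML needs the third-party spark-xml package (com.databricks:spark-xml)
xml_df = (spark.read.format("xml")
          .option("rowTag", "event")  # "event" is a hypothetical row tag
          .load("hdfs:///data/raw/events.xml"))

# Write curated output back to HDFS as Parquet
parquet_df.write.mode("overwrite").parquet("hdfs:///data/curated/events")
```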
Posted 1 week ago
3.0 - 6.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Your Role: Strong Spark programming experience with Java. Good knowledge of SQL query writing and shell scripting (a Spark SQL sketch follows below). Experience working in Agile mode. Analyze, design, develop, deploy, and operate high-performance, high-quality services that serve users in a cloud environment. Good understanding of the client ecosystem and expectations. In charge of code reviews, the integration process, test organization, and quality of delivery. Take part in development. Experienced in writing queries using SQL commands. Experienced with deploying and operating code in a cloud environment. Experienced in working without much supervision. Your Profile: Primary skills: Java, Spark, SQL. Secondary skills (good to have): Hadoop or any cloud technology, Kafka, or BO. What you'll love about working here: Choosing Capgemini means having the opportunity to make a difference, whether for the world's leading businesses or for society. It means getting the support you need to shape your career in the way that works for you. It means that when the future doesn't look as bright as you'd like, you have the opportunity to make change, to rewrite it. When you join Capgemini, you don't just start a new job. You become part of something bigger: a diverse collective of free-thinkers, entrepreneurs, and experts, all working together to unleash human energy through technology, for an inclusive and sustainable future. At Capgemini, people are at the heart of everything we do! You can exponentially grow your career by being part of innovative projects and taking advantage of our extensive Learning & Development programs. With us, you will experience an inclusive, safe, healthy, and flexible work environment to bring out the best in you! You also get a chance to make positive social change and build a better world by taking an active role in our Corporate Social Responsibility and Sustainability initiatives. And whilst you make a difference, you will also have a lot of fun.
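The role's stack is Java, so purely as an illustration of the Spark SQL flow the posting emphasizes, here is a minimal PySpark sketch; the bucket path, view, and column names are invented:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-demo").getOrCreate()

# Illustrative input; any Spark-readable source works the same way
orders = spark.read.parquet("s3a://example-bucket/orders")
orders.createOrReplaceTempView("orders")

# Plain SQL over the view, matching the SQL-writing emphasis above
daily = spark.sql("""
    SELECT order_date,
           COUNT(*)    AS order_cnt,
           SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")

daily.write.mode("overwrite").parquet("s3a://example-bucket/marts/daily_orders")
```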
Posted 1 week ago
6.0 - 10.0 years
3 - 8 Lacs
Pune
Work from Office
Roles & Responsibilities: Oracle Warehouse Builder, OWB, Oracle Workflow Builder, Oracle TBSS Oracle Warehouse Builder 9i (Client Version 9.0.2.62.3/Repository Version 9.0.2.0.0) Oracle Warehouse Builder 4 Oracle Workflow Builder 2.6.2 Oracle Database 10gTNS for IBM/AIX RISC System/6000Version 10.2.0.5.0 - Production More than 5 years experience on Oracle Warehouse Builder (OWB) and Oracle Workflow Builder Expert Knowledge on Oracle PL/SQL to develop individual code objects to entire DataMarts. Scheduling tools Oracle TBSS (DBMS_SCHEDULER jobs to create and run) and trigger based for file sources based on control files. Must have design and development experience in data pipeline solutions from different source systems (FILES, Oracle) to data lakes. Must have involved in creating/designing Hive tables and loading analyzing data using hive queries. Must have knowledge in CA Workload Automation DE 12.2 to create jobs and scheduling. Extensive knowledge on entire life cycle of Change/Incident/Problem management by using ServiceNow. Oracle Warehouse Builder 9i (Client Version 9.0.2.62.3/Repository Version 9.0.2.0.0). Oracle Warehouse Builder 4 Oracle Workflow Builder 2.6.2 Oracle Database 10gTNS for IBM/AIX RISC System/6000Version 10.2.0.5.0 - Production. Oracle Enterprise Manager 10gR1.(Monitoring jobs and tablespaces utilization) Extensive knowledge in fetching Mainframe Cobol files(ASCII AND EBSDIC formats) to the landing area and processing(formatting) and loading(Error handling) of these files to oracle tables by using SQL*Loader and External tables. Extensive knowledge in Oracle Forms 6 to integrate with OWB 4. Extensive knowledge on entire life cycle of Change/Incident/Problem management by using Service-Now. work closely with the Business owner teams and Functional/Data analysts in the entire development/BAU process. Work closely with AIX support, DBA support teams for access privileges and storage issues etc. work closely with the Batch Operations team and MFT teams for file transfer issues. Migration of Oracle to Hadoop eco system: Must have working experience in Hadoop eco system elements like HDFS,MapReduce,YARN etc. Must have working knowledge on Scala & Spark Dataframes to convert the existing code to Hadoop data lakes. Must have design and development experience in data pipeline solutions from different source systems (FILES, Oracle) to data lakes. Must have involved in creating/designing Hive tables and loading analyzing data using hive queries. Must have knowledge in creating Hive partitions, Dynamic partitions and buckets. Must have knowledge in CA Workload Automation DE 12.2 to create jobs and scheduling. Use Denodo for Data virtualization to the required data access for end users.
Posted 1 week ago
12.0 - 15.0 years
13 - 17 Lacs
Mumbai
Work from Office
12+ years of experience in the big data space across architecture, design, development, testing, and deployment, with a full understanding of the SDLC.
1. Experience with Hadoop and its related technology stack.
2. Experience with the Hadoop ecosystem (HDP + CDP) / big data (especially Hive); hands-on experience with programming languages such as Java, Scala, or Python; hands-on experience/knowledge of Spark.
3. Responsible for the uptime and reliable running of all ingestion/ETL jobs.
4. Good SQL and comfort working in a Unix/Linux environment are a must.
5. Create and maintain optimal data pipeline architecture.
6. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
7. Good to have: cloud experience.
8. Good to have: experience with Hadoop integration with data visualization tools like Power BI.
Location: Mumbai, Pune, Chennai, Hyderabad, Coimbatore, Kolkata
Posted 1 week ago
3.0 - 7.0 years
10 - 14 Lacs
Pune
Work from Office
The developer leads cloud application development/deployment. A developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities, and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Strong proficiency in Java, the Spring Framework, Spring Boot, and RESTful APIs; excellent understanding of OOP and design patterns. Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices. Strong knowledge of microservice logging, monitoring, debugging, and testing; in-depth knowledge of relational databases (e.g., MySQL). Experience with container platforms such as Docker and Kubernetes; experience with messaging platforms such as Kafka or IBM MQ; good understanding of Test-Driven Development. Familiar with Ant, Maven, or another build automation framework; good knowledge of basic UNIX commands. Preferred technical and professional experience: Experience in concurrent design and multi-threading. Primary Skills: Core Java, Spring Boot, Java2/EE, Microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python.
Posted 1 week ago
3.0 - 7.0 years
10 - 14 Lacs
Chennai
Work from Office
As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Core Java, Spring Boot, Java2/EE, Microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python. Preferred technical and professional experience: None.
Posted 1 week ago
6.0 - 11.0 years
19 - 27 Lacs
Haryana
Work from Office
Job Description. Key responsibilities:
1. Understand, implement, and automate ETL pipelines to industry standards.
2. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, designing infrastructure for greater scalability, etc.
3. Develop, integrate, test, and maintain existing and new applications.
4. Design and create data pipelines (data lake / data warehouses) for real-world energy analytics solutions.
5. Expert-level proficiency in Python (preferred) for automating everyday tasks.
6. Strong understanding of and experience with distributed computing frameworks, particularly Spark, Spark SQL, Kafka, Spark Streaming, Hive, Azure Databricks, etc. (a streaming sketch follows this list).
7. At least some experience with other leading cloud platforms, preferably Azure.
8. Hands-on experience with Azure Data Factory, Logic Apps, Analysis Services, Azure Blob Storage, etc.
9. Ability to work in a team in an agile setting, familiarity with JIRA, and a clear understanding of how Git works.
10. Must have 5-7 years of experience.
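For item 6, here is a minimal Spark Structured Streaming sketch reading from Kafka and landing to a Delta path, as one might on Azure Databricks; the broker, topic, schema, and paths are assumptions, and outside Databricks the Kafka source needs the spark-sql-kafka package:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructType

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Hypothetical payload schema for energy readings
schema = (StructType()
          .add("site_id", StringType())
          .add("kwh", DoubleType()))

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
       .option("subscribe", "energy-readings")            # hypothetical topic
       .load())

parsed = (raw.select(from_json(col("value").cast("string"), schema).alias("r"))
             .select("r.*"))

# Checkpointing gives failure recovery and exactly-once sink semantics
query = (parsed.writeStream.format("delta")
         .option("checkpointLocation", "/chk/energy")
         .outputMode("append")
         .start("/lake/bronze/energy"))
query.awaitTermination()
```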
Posted 1 week ago
6.0 - 11.0 years
14 - 17 Lacs
Mysuru
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact. Responsibilities: Manage end-to-end feature development and resolve challenges faced in implementing it. Learn new technologies and apply them in feature development within the time frame provided. Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Overall, more than 6 years of experience, including more than 4 years of strong hands-on experience in Python and Spark. Strong technical ability to understand, design, write, and debug applications in Python and PySpark. Strong problem-solving skills. Preferred technical and professional experience: Good to have: hands-on experience with cloud technologies (AWS/GCP/Azure).
Posted 1 week ago
15.0 - 20.0 years
6 - 10 Lacs
Mumbai
Work from Office
Location: Mumbai. Experience: 15+ years in data engineering/architecture. Role Overview: Lead the architectural design and implementation of a secure, scalable Cloudera-based Data Lakehouse for one of India's top public sector banks. Key Responsibilities:
* Design end-to-end Lakehouse architecture on Cloudera
* Define data ingestion, processing, storage, and consumption layers
* Guide data modeling, governance, lineage, and security best practices
* Define migration roadmap from existing DWH to CDP
* Lead reviews with client stakeholders and engineering teams
Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise. Skills Required:
* Proven experience with Cloudera CDP, Spark, Hive, HDFS, Iceberg (an Iceberg sketch follows below)
* Deep understanding of Lakehouse patterns and data mesh principles
* Familiarity with data governance tools (e.g., Apache Atlas, Collibra)
* Banking/FSI domain knowledge highly desirable
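As a hedged illustration of the Iceberg skill listed above: a minimal PySpark sketch that assumes the Iceberg Spark runtime jar and a Hive metastore are already configured on the cluster; the catalog, schema, and table names are invented, not the client's:

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("iceberg-demo")
         .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
         .config("spark.sql.catalog.lake.type", "hive")  # Hive metastore-backed
         .getOrCreate())

# Hidden partitioning: Iceberg derives daily partitions from txn_ts
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.core.transactions (
        txn_id  STRING,
        acct_id STRING,
        amount  DECIMAL(18,2),
        txn_ts  TIMESTAMP
    ) USING iceberg
    PARTITIONED BY (days(txn_ts))
""")

# Snapshot history supports audit and time travel, useful in banking
spark.sql("SELECT snapshot_id, committed_at, operation "
          "FROM lake.core.transactions.snapshots").show()
```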
Posted 1 week ago
8.0 - 13.0 years
5 - 8 Lacs
Mumbai
Work from Office
Role Overview: Seeking an experienced Apache Airflow specialist to design and manage data orchestration pipelines for batch/streaming workflows in a Cloudera environment. Key Responsibilities: Design, schedule, and monitor DAGs for ETL/ELT pipelines (a DAG sketch follows below). Integrate Airflow with Cloudera services and external APIs. Implement retries, alerts, logging, and failure recovery. Collaborate with data engineers and DevOps teams. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise. Skills Required: Experience: 3–8 years. Expertise in Airflow 2.x, Python, Bash. Knowledge of CI/CD for Airflow DAGs. Proven experience with Cloudera CDP and Spark/Hive-based data pipelines. Integration with Kafka, REST APIs, and databases.
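A minimal Airflow 2.x sketch of the responsibilities above, with retries and failure alerts wired in; the DAG id, schedule, script paths, and alert address are placeholders, not part of the posting:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-eng",
    "retries": 3,                         # automatic retries on failure
    "retry_delay": timedelta(minutes=5),
    "email_on_failure": True,             # alerting hook
    "email": ["oncall@example.com"],      # placeholder address
}

with DAG(
    dag_id="daily_etl",                   # hypothetical DAG
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    extract = BashOperator(task_id="extract",
                           bash_command="python /opt/etl/extract.py")
    transform = BashOperator(task_id="transform",
                             bash_command="spark-submit /opt/etl/transform.py")
    load = BashOperator(task_id="load",
                        bash_command="python /opt/etl/load.py")

    extract >> transform >> load          # ETL ordering
```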
Posted 1 week ago
15.0 - 20.0 years
5 - 9 Lacs
Mumbai
Work from Office
Location: Mumbai. Role Overview: As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems. Key Responsibilities: Build scalable batch and real-time ETL pipelines using Spark and Hive. Integrate structured and unstructured data sources. Perform performance tuning and code optimization. Support orchestration and job scheduling (NiFi, Airflow). Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Experience: 3–15 years. Proficiency in PySpark/Scala with Hive/Impala. Experience with data partitioning, bucketing, and optimization (a partitioning/bucketing sketch follows below). Familiarity with Kafka, Iceberg, and NiFi is a must. Knowledge of banking or financial datasets is a plus.
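As an illustration of the partitioning and bucketing skills listed above, shown in PySpark (the role also accepts Scala); the source path, table, and column names are invented:

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("partition-demo")
         .enableHiveSupport().getOrCreate())

txns = spark.read.parquet("/lake/raw/txns")  # illustrative source

# partitionBy enables date pruning on reads; bucketBy co-locates rows by
# account for faster joins. Note: bucketBy requires saveAsTable (a catalog
# table), not a plain path write.
(txns.write
     .partitionBy("txn_date")
     .bucketBy(16, "acct_id")
     .sortBy("acct_id")
     .mode("overwrite")
     .saveAsTable("curated.txns"))
```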
Posted 1 week ago
6.0 - 11.0 years
14 - 17 Lacs
Pune
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact. Responsibilities: Manage end-to-end feature development and resolve challenges faced in implementing it. Learn new technologies and apply them in feature development within the time frame provided. Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Overall, more than 6 years of experience, including more than 4 years of strong hands-on experience in Python and Spark. Strong technical ability to understand, design, write, and debug applications in Python and PySpark. Strong problem-solving skills. Preferred technical and professional experience: Good to have: hands-on experience with cloud technologies (AWS/GCP/Azure).
Posted 1 week ago
1.0 - 3.0 years
3 - 7 Lacs
Chennai
Hybrid
Strong experience in Python. Good experience in Databricks. Experience working on the AWS/Azure cloud platforms. Experience working with REST APIs and services, and with messaging and event technologies. Experience with ETL or data pipeline tools. Experience with streaming platforms such as Kafka. Demonstrated experience working with large and complex data sets. Ability to document data pipeline architecture and design. Experience in Airflow is nice to have. Ability to build complex Delta Lake solutions (a Delta upsert sketch follows below).
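For the Delta Lake point above, here is a minimal upsert (MERGE) sketch using the delta-spark package; the paths, join key, and feed location are assumptions, and on Databricks the two config lines are unnecessary:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("delta-upsert")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

updates = spark.read.json("/landing/customers.json")  # placeholder feed

target = DeltaTable.forPath(spark, "/lake/silver/customers")
(target.alias("t")
 .merge(updates.alias("u"), "t.customer_id = u.customer_id")
 .whenMatchedUpdateAll()      # update rows that already exist
 .whenNotMatchedInsertAll()   # insert brand-new customers
 .execute())
```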
Posted 1 week ago
4.0 - 9.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Roles and Responsibilities: 4+ years of experience as a data developer using Python. Knowledge of Spark and PySpark preferable but not mandatory. Azure cloud experience preferred; alternate cloud experience is fine. Preferred experience with the Azure platform, including Azure Data Lake, Databricks, and Data Factory. Working knowledge of different file formats such as JSON, Parquet, CSV, etc. Familiarity with data encryption and data masking (a masking sketch follows below). Database experience in SQL Server is preferable; experience with NoSQL databases like MongoDB preferred. Team player; reliable, self-motivated, and self-disciplined.
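Since the posting mentions data masking alongside file formats, here is a minimal PySpark masking sketch; the file paths and column names are invented:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("masking-demo").getOrCreate()

df = spark.read.option("header", True).csv("/data/customers.csv")  # illustrative

masked = (df
          # One-way hash: the email can still be joined on but not read back
          .withColumn("email_hash", F.sha2(F.col("email"), 256))
          # Mask all but the last four digits of the card number
          .withColumn("card_masked",
                      F.regexp_replace(F.col("card_number"), r"\d(?=\d{4})", "*"))
          .drop("email", "card_number"))

masked.write.mode("overwrite").parquet("/data/masked/customers")
```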
Posted 1 week ago
5.0 years
3 - 5 Lacs
Hyderābād
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. ML Ops Engineer (Senior Consultant). Key Responsibilities: Lead the design, implementation, and maintenance of scalable ML infrastructure. Collaborate with data scientists to deploy, monitor, and optimize machine learning models. Automate complex data processing workflows and ensure data quality. Optimize and manage cloud resources for cost-effective operations. Develop and maintain robust CI/CD pipelines for ML models. Troubleshoot and resolve advanced issues related to ML infrastructure and deployments. Mentor and guide junior team members, fostering a culture of continuous learning. Work closely with cross-functional teams to understand requirements and deliver innovative solutions. Drive best practices and standards for ML Ops within the organization. Required Skills and Experience: Minimum 5 years of experience in infrastructure engineering. Proficiency in using EMR (Elastic MapReduce) for large-scale data processing. Extensive experience with SageMaker, ECR, S3, Lambda functions, cloud capabilities, and deployment of ML models (a small boto3 sketch follows below). Strong proficiency in Python scripting and other programming languages. Experience with CI/CD tools and practices. Solid understanding of the machine learning lifecycle and best practices. Strong problem-solving skills and attention to detail. Excellent communication skills and ability to work collaboratively in a team environment. Demonstrated ability to take ownership and drive projects to completion. Proven experience in leading and mentoring teams. Beneficial Skills and Experience: Experience with containerization and orchestration tools (Docker, Kubernetes). Familiarity with data visualization tools and techniques. Knowledge of big data technologies (Spark, Hadoop). Experience with version control systems (Git). Understanding of data governance and security best practices. Experience with monitoring and logging tools (Prometheus, Grafana). Stakeholder management skills and ability to communicate technical concepts to non-technical audiences. EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
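As a small, hedged illustration of the S3/Lambda items above, here is a boto3 sketch; the bucket, object key, and function name are hypothetical, not EY resources:

```python
import json

import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

# Stage a trained model artifact in S3 for downstream deployment
s3.upload_file("model.tar.gz", "ml-artifacts-bucket", "churn/v3/model.tar.gz")

# Trigger a (hypothetical) deployment Lambda with the artifact location
resp = lam.invoke(
    FunctionName="deploy-model",   # hypothetical function name
    InvocationType="Event",        # async fire-and-forget
    Payload=json.dumps({"s3_uri": "s3://ml-artifacts-bucket/churn/v3/model.tar.gz"}),
)
print(resp["StatusCode"])
```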
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Details:
1. Role: Senior Developer
2. Required Technical Skill Set: Spark/Scala/Unix
3. Desired Experience Range: 5-8 years
4. Location of Requirement: Pune
Desired Competencies (Technical/Behavioral Competency)
Must-Have (ideally not more than 3-5): Minimum 4+ years of experience in Spark/Scala development. Experience designing and developing Big Data solutions using Hadoop ecosystem technologies, with Hadoop components like HDFS, Spark, Hive, the Parquet file format, YARN, MapReduce, and Sqoop. Good experience writing and optimizing Spark jobs, Spark SQL, etc. Should have worked on both batch and streaming data processing. Experience writing and optimizing complex Hive and SQL queries to process huge volumes of data; good with UDFs, tables, joins, views, etc. Experience debugging Spark code. Working knowledge of basic UNIX commands and shell scripting. Experience with Autosys and Gradle.
Good-to-Have: Good analytical and debugging skills. Ability to coordinate with SMEs and stakeholders, manage timelines and escalations, and provide on-time status. Write clear and precise documentation/specifications. Work in an agile environment. Create documentation and document all developed mappings.
Responsibilities of / expectations from the role:
1. Create Scala/Spark jobs for data transformation and aggregation
2. Produce unit tests for Spark transformations and helper methods (a test sketch follows below)
3. Write Scaladoc-style documentation with all code
4. Design data processing pipelines
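The role above is Scala-first; purely as an illustration of responsibilities 1-2 (a transformation plus a unit test for it), here is a minimal PySpark sketch with invented table and column names:

```python
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

def aggregate_daily(df: DataFrame) -> DataFrame:
    # Transformation under test: daily totals per account
    return (df.groupBy("acct_id", "txn_date")
              .agg(F.sum("amount").alias("daily_total")))

def test_aggregate_daily():
    spark = (SparkSession.builder.master("local[2]")
             .appName("unit-test").getOrCreate())
    rows = [("a1", "2024-01-01", 10.0), ("a1", "2024-01-01", 5.0)]
    df = spark.createDataFrame(rows, ["acct_id", "txn_date", "amount"])
    out = aggregate_daily(df).collect()
    assert out[0]["daily_total"] == 15.0

test_aggregate_daily()  # would normally run under pytest
```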
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ninja Van is a late-stage logtech startup that is disrupting a massive industry with innovation and cutting-edge technology. Launched in 2014 in Singapore, we have grown rapidly to become one of Southeast Asia's largest and fastest-growing express logistics companies. Since our inception, we've delivered to 100 million different customers across the region with added predictability, flexibility and convenience. Join us in our mission to connect shippers and shoppers across Southeast Asia to a world of new possibilities. More About Us: We process 250 million API requests and 3 TB of data every day. We deliver more than 2 million parcels every day. 100% network coverage with 2600+ hubs and stations in 6 SEA markets (Singapore, Malaysia, Indonesia, Thailand, Vietnam and the Philippines), reaching 500 million consumers. 2 million active shippers in all e-commerce segments, from the largest marketplaces to individual social commerce sellers. Raised more than US$500 million over five rounds. We are looking for world-class talent to join our crack team of engineers, product managers and designers. We want people who are passionate about creating software that makes a difference to the world. We like people who are brimming with ideas and who take initiative rather than wait to be told what to do. We prize team-first mentality, personal responsibility and tenacity to solve hard problems and meet deadlines. As part of a small and lean team, you will have a very direct impact on the success of the company. Roles & Responsibilities: Design, develop, and maintain Ninja Van's infrastructure for data streaming, processing, and storage. Build tools to ensure effective maintenance and monitoring of the data infrastructure. Contribute to key architectural decisions for data pipelines and lead the implementation of major initiatives. Collaborate with stakeholders to deliver scalable and high-performance solutions for data requirements, including extraction, transformation, and loading (ETL) from diverse data sources. Enhance the team's data capabilities by sharing knowledge, enforcing best practices, and promoting data-driven decision-making. Develop and enforce Ninja Van's data retention policies and backup strategies, ensuring data is stored redundantly and securely. Requirements: Solid computer science fundamentals, excellent problem-solving skills, and a strong understanding of distributed computing principles. At least 8+ years of experience in a similar role, with a proven track record of building scalable and high-performance data infrastructure using Python, PySpark, Spark, and Airflow. Expert-level SQL knowledge and extensive experience working with both relational and NoSQL databases. Advanced knowledge of Apache Kafka (a consumer sketch follows this posting), along with demonstrated proficiency in Hadoop v2, HDFS, and MapReduce. Hands-on experience with stream-processing systems (e.g., Storm, Spark Streaming), big data querying tools (e.g., Pig, Hive, Spark), and data serialization frameworks (e.g., Protobuf, Thrift, Avro). [Good to have] Familiarity with infrastructure-as-code technologies like Terraform, Terragrunt, Ansible, or Helm. Don't worry if you don't have this experience; what matters is your interest in learning! [Good to have] Experience with Change Data Capture (CDC) technologies such as Maxwell or Debezium. Bachelor's or Master's degree in Computer Science or a related field from a top university.
Tech Stack. Backend: Play (Java 8+), Golang, Node.js, Python, FastAPI. Frontend: AngularJS, ReactJS. Mobile: Android, Flutter, React Native. Cache: Hazelcast, Redis. Data storage: MySQL, TiDB, Elasticsearch, Delta Lake. Infrastructure monitoring: Prometheus, Grafana. Orchestrator: Kubernetes. Containerization: Docker, Containerd. Cloud Provider: GCP, AWS. Data pipelines: Apache Kafka, Spark Streaming, Maxwell/Debezium, PySpark, TiCDC. Workflow manager: Apache Airflow. Submit a job application: By applying to the job, you acknowledge that you have read, understood and agreed to our Privacy Policy Notice (the "Notice") and consent to the collection, use and/or disclosure of your personal data by Ninja Logistics Pte Ltd (the "Company") for the purposes set out in the Notice. In the event that your job application or personal data was received from any third party pursuant to the purposes set out in the Notice, you warrant that such third party has been duly authorised by you to disclose your personal data to us for the purposes set out in the Notice.
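Given the Kafka emphasis in the requirements above, here is a minimal, hedged consumer sketch using the confluent-kafka Python client; the broker address, group id, and topic name are placeholders, not Ninja Van infrastructure:

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",  # placeholder broker
    "group.id": "parcel-events-etl",     # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["parcel-events"])    # hypothetical topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # Hand the payload to the downstream pipeline (e.g., Spark, HDFS)
        print(msg.key(), msg.value().decode("utf-8"))
finally:
    consumer.close()
```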
Posted 1 week ago
15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We're in an unbelievably exciting area of tech and are fundamentally reshaping the data storage industry. Here, you lead with innovative thinking, grow along with us, and join the smartest team in the industry. This type of work, work that changes the world, is what the tech industry was founded on. So, if you're ready to seize the endless opportunities and leave your mark, come join us. About The Role: We are seeking a Senior Manager – Data & Analytics to lead enterprise-scale data science and analytics initiatives, focused on activating curated datasets and modernizing the organization's data infrastructure. This role will lead the strategy, design, and implementation of scalable analytics models in partnership with our enterprise data warehouse and big data platforms. The ideal candidate will combine deep technical expertise in data science and engineering with the business acumen to influence senior stakeholders and drive high-impact decisions. Key Responsibilities: Team Management: Direct and mentor the data science team to design, build, and deploy advanced analytics models and solutions. Data Pipeline: Design scalable pipelines and workflows for large-scale data processing with high reliability and performance. Model Development: Oversee development of ML/AI-driven predictive and prescriptive models with a focus on operationalization. Big Data Strategy: Drive scalable analytics solutions using Spark, Hadoop, Snowflake, S3, and cloud-native big data architectures. Code Optimization: Supervise automation and optimization of data integration and analysis workflows using SQL, Python, and modern tools. Cloud Management: Manage datasets on Snowflake and similar platforms with emphasis on governance and best practices. Model Maintenance: Define practices for model monitoring, retraining, and documentation to ensure long-term relevance and compliance. Stakeholder Engagement: Collaborate with stakeholders to understand needs, prioritize projects, and create solutions that drive measurable outcomes. Continuous Improvement: Champion innovation by integrating emerging technologies and techniques into the team's toolkit; drive a culture of continuous improvement by staying abreast of advancements in data science and integrating innovative methods into workflows. Mentorship: Foster growth, collaboration, and knowledge sharing within the data science team and across the broader analytics community. Basic Qualifications: Master's or Ph.D. with 15 years of experience, including 10+ years of relevant experience in data science, statistics, operational research, or a related field. Hands-on experience with machine learning models, both supervised and unsupervised, in large-scale production settings. Proficiency in Python, SQL, and modern ML frameworks. Extensive experience with big data technologies such as Hadoop, Spark, MapReduce, and Snowflake. Track record of translating data into business impact and influencing senior stakeholders. Strong foundation in data modeling and governance aligned with data warehouse best practices. Excellent written and verbal communication skills. Preferred Qualifications: Experience with orchestration tools (e.g., Airflow, dbt). Familiarity with BI/visualization tools such as Tableau, Looker, or Power BI. Experience working with cross-functional business units. Background in building and leading enterprise-level data science or advanced analytics programs. Understanding of ethical implications and governance practices related to data science and ML.
What You Can Expect From Us: Pure Innovation: We celebrate those who think critically, like a challenge and aspire to be trailblazers. Pure Growth: We give you the space and support to grow along with us and to contribute to something meaningful. We have been named Fortune's Best Large Workplaces in the Bay Area™, Fortune's Best Workplaces for Millennials™ and certified as a Great Place to Work®! Pure Team: We build each other up and set aside ego for the greater good. And because we understand the value of bringing your full and best self to work, we offer a variety of perks to manage a healthy balance, including flexible time off, wellness resources and company-sponsored team events. Check out purebenefits.com for more information. Accommodations And Accessibility: Candidates with disabilities may request accommodations for all aspects of our hiring process. For more on this, contact us at TA-Ops@purestorage.com if you're invited to an interview. Where Differences Fuel Innovation: We're forging a future where everyone finds their rightful place and where every voice matters. Where uniqueness isn't just accepted but embraced. That's why we are committed to fostering the growth and development of every person, cultivating a sense of community through our Employee Resource Groups and advocating for inclusive leadership. At Pure Storage, diversity, equity, inclusion and sustainability are part of our DNA because we believe our people will shape the next chapter of our success story. Pure Storage is proud to be an equal opportunity employer. We strongly encourage applications from Indigenous Peoples, racialized people, people with disabilities, people from gender and sexually diverse communities, and people with intersectional identities. We also encourage you to apply even if you feel you don't match all of the role criteria. If you think you can do the job and feel you're a good match, please apply.
Posted 1 week ago
10.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Build the future of the AI Data Cloud. Join the Snowflake team. We are looking for a Solutions Architect to be part of our Professional Services team to deploy cloud products and services for our customers. This person must be a hands-on self-starter who loves solving innovative problems in a fast-paced, agile environment. The ideal candidate will have the insight to connect a specific business problem and Snowflake's solution and communicate that connection and vision to various technical and executive audiences. The person we're looking for shares our passion for reinventing the data platform and thrives in a dynamic environment. That means having the flexibility and willingness to jump in and get done what needs to be done to make Snowflake and our customers successful. It means keeping up to date on the ever-evolving technologies for data and analytics in order to be an authoritative resource for Snowflake, System Integrators and customers. And it means working collaboratively with a broad range of people both inside and outside the company. AS A SOLUTIONS ARCHITECT AT SNOWFLAKE, YOU WILL: Be a technical expert on all aspects of Snowflake. Guide customers through the process of migrating to Snowflake and develop methodologies to improve the migration process. Deploy Snowflake following best practices, including ensuring knowledge transfer so that customers are properly enabled and are able to extend the capabilities of Snowflake on their own. Work hands-on with customers to demonstrate and communicate implementation best practices on Snowflake technology. Maintain a deep understanding of competitive and complementary technologies and vendors and how to position Snowflake in relation to them. Work with System Integrator consultants at a deep technical level to successfully position and deploy Snowflake in customer environments. Provide guidance on how to resolve customer-specific technical challenges. Support other members of the Professional Services team in developing their expertise. Collaborate with Product Management, Engineering, and Marketing to continuously improve Snowflake's products and marketing. OUR IDEAL SOLUTIONS ARCHITECT WILL HAVE: Minimum 10 years of experience working with customers in a pre-sales or post-sales technical role. Experience migrating from one data platform to another and holistically addressing the unique challenges of migrating to a new platform. University degree in computer science, engineering, mathematics or related fields, or equivalent experience. Outstanding skills presenting to both technical and executive audiences, whether impromptu on a whiteboard or using presentations and demos. Understanding of the complete data analytics stack and workflow, from ETL to data platform design to BI and analytics tools. Strong skills in databases, data warehouses, and data processing. Extensive hands-on expertise with SQL and SQL analytics. Experience and a track record of success selling data and/or analytics software to enterprise customers; this includes proven skills identifying key stakeholders, winning value propositions, and compelling events. Extensive knowledge of and experience with large-scale database technology (e.g. Netezza, Exadata, Teradata, Greenplum, etc.). Software development experience with C/C++ or Java. Scripting experience with Python, Ruby, Perl, Bash. Ability and flexibility to travel to work with customers on-site. BONUS POINTS FOR THE FOLLOWING: Experience with non-relational platforms and tools for large-scale data processing (e.g. Hadoop, HBase). Familiarity and experience with common BI and data exploration tools (e.g. Microstrategy, Business Objects, Tableau). Experience and understanding of large-scale infrastructure-as-a-service platforms (e.g. Amazon AWS, Microsoft Azure, OpenStack, etc.). Experience implementing ETL pipelines using custom and packaged tools. Experience using AWS services such as S3, Kinesis, Elastic MapReduce, Data Pipeline. Experience selling enterprise SaaS software. Proven success at enterprise software. WHY JOIN OUR PROFESSIONAL SERVICES TEAM AT SNOWFLAKE? Unique opportunity to work on a truly disruptive software product. Get unique, hands-on experience with bleeding-edge data warehouse technology. Develop, lead and execute an industry-changing initiative. Learn from the best! Join a dedicated, experienced team of professionals. Snowflake is growing fast, and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact? For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com
Posted 1 week ago
0 years
0 Lacs
Bengaluru South, Karnataka, India
On-site
You Lead the Way. We've Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you'll be recognized for your contributions, leadership, and impact; every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together. As part of our diverse tech team, you can design, code and ship software that makes us an essential part of our customers' digital lives. Here, you can work alongside talented data engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. American Express offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex. About American Express Technology Business Enablement: The American Express Technology Business Enablement team enables us to transform product development practices through strategic frameworks, processes, tools and actionable insights. Job Description: As a Data Engineer, you will be responsible for designing, developing, and maintaining robust and scalable frameworks, services, applications, and pipelines for processing huge volumes of data. You will work closely with cross-functional teams to deliver high-quality software solutions that meet our organizational needs. Key Responsibilities: Design and develop solutions using big data tools and technologies like MapReduce, Hive, Spark, etc. Extensive hands-on experience in object-oriented programming using Python, PySpark APIs, etc. Experience in building data pipelines for huge volumes of data. Experience in designing, implementing, and managing various ETL job execution flows. Experience in implementing and maintaining data ingestion processes. Hands-on experience writing basic to advanced optimized queries using HQL, SQL and Spark. Hands-on experience in designing, implementing, and maintaining data transformation jobs using the most efficient tools/technologies. Ensure the performance, quality, and responsiveness of solutions. Participate in code reviews to maintain code quality. Should be able to write shell scripts. Utilize Git for source version control. Set up and maintain CI/CD pipelines. Troubleshoot, debug, and upgrade existing applications and ETL job chains. Required Skills and Qualifications: Bachelor's degree in Computer Science Engineering, or a related field. Proven experience as a Data Engineer or in a similar role. Strong proficiency in object-oriented programming using Python. Experience with ETL job design principles.
Solid understanding of HQL, SQL and data modeling. Knowledge of Unix/Linux and shell scripting principles. Familiarity with Git and version control systems. Experience with Jenkins and CI/CD pipelines. Knowledge of software development best practices and design patterns. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. Hands-on experience with Google Cloud. We back our colleagues and their loved ones with benefits and programs that support their holistic well-being. That means we prioritize their physical, financial, and mental health through each stage of life. Benefits include: Competitive base salaries. Bonus incentives. Support for financial well-being and retirement. Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location). Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need. Generous paid parental leave policies (depending on your location). Free access to global on-site wellness centers staffed with nurses and doctors (depending on location). Free and confidential counseling support through our Healthy Minds program. Career development and training opportunities. American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 1 week ago
7.0 years
0 Lacs
India
Remote
Kindly share your resume with lakshmi.b@iclanz.com or hr@iclanz.com
Position: Lead Data Engineer - Health Care domain
Experience: 7+ Years
Location: Hyderabad | Chennai | Remote
SUMMARY: The Data Engineer will be responsible for ETL and documentation in building data warehouse and analytics capabilities. Additionally, they will maintain existing systems/processes and develop new features, along with reviewing, presenting and implementing performance improvements.
Duties and Responsibilities
• Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies
• Monitor active ETL jobs in production
• Build out data lineage artifacts to ensure all current and future systems are properly documented
• Assist with the build-out of design/mapping documentation to ensure development is clear and testable for QA and UAT purposes
• Assess current and future data transformation needs to recommend, develop, and train on new data integration tool technologies
• Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations
• Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs
• Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence and MDM solutions, including Data Lakes/Data Vaults
Required Skills
• This job has no supervisory responsibilities
• Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years' experience in business analytics, data science, software development, data modeling or data engineering work
• 5+ years' experience with strong SQL query/development skills
• Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks
• Hands-on experience with ETL tools (e.g., Informatica, Talend, dbt, Azure Data Factory)
• Experience working in the healthcare industry with PHI/PII
• Creative, lateral, and critical thinker
• Excellent communicator
• Well-developed interpersonal skills
• Good at prioritizing tasks and time management
• Ability to describe, create and implement new solutions
• Experience with related or complementary open source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef)
• Knowledge of / hands-on experience with BI tools and reporting software (e.g., Cognos, Power BI, Tableau)
• Big Data stack (e.g., Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume)
Details Required for Submission: Requirement Name; First Name; Last Name; Email id; Best Number; Current Organization / Previous Organization you Worked (last date); Currently working on a project; Total Experience; Relevant Experience; Primary Skills; Years of Experience; Ratings (out of 10) for Data Engineer, ETL, Healthcare (PHI/PII), Fivetran, dbt; LinkedIn profile; Comfortable to work from 03.00 pm to 12.00 am IST?; Communication; Education Details (Degree & Passed-out year); Notice Period; Vendor Company Name: iClanz Inc; Expected Salary; Current Location / Preferred Location
Posted 1 week ago
5.0 years
0 Lacs
India
On-site
Data Engineer. Astreya offers comprehensive IT support and managed services. These services include Data Center and Network Management, Digital Workplace Services (like Service Desk, Audio Visual, and IT Asset Management), as well as Next-Gen Digital Engineering services encompassing Software Engineering, Data Engineering, and cybersecurity solutions. Astreya's expertise lies in creating seamless interactions between people and technology to help organizations achieve operational excellence and growth. Job Description: We are seeking an experienced Data Engineer to join our analytics division. You will be aligned with our Data Analytics and BI vertical. You will conceptualize and own the build-out of problem-solving data marts for consumption by data science and BI teams, evaluating design and operational tradeoffs within systems. Design, develop, and maintain robust data pipelines and ETL processes using data platforms for the organization's centralized data warehouse. Create or contribute to frameworks that improve the efficacy of logging data, while working with the Engineering team to triage issues and resolve them. Validate data integrity throughout the collection process, performing data profiling to identify and comprehend data anomalies. Influence product and cross-functional (engineering, data science, operations, strategy) teams to identify data opportunities to drive impact. Requirements: Experience & Education: Bachelor's degree in Computer Science, Mathematics, a related field, or equivalent practical experience. 5 years of experience coding with SQL or one or more programming languages (e.g., Python, Java, R, etc.) for data manipulation, analysis, and automation. 5 years of experience designing data pipelines (ETL) and dimensional data modeling for synchronous and asynchronous system integration and implementation. Experience managing and troubleshooting technical issues, and working with Engineering and Sales Services teams. Preferred qualifications: Master's degree in Engineering, Computer Science, Business, or a related field. Experience with cloud-based services relevant to data engineering, data storage, data processing, data warehousing, real-time streaming, and serverless computing. Experience with experimentation infrastructure and measurement approaches in a technology platform. Experience with data processing software (e.g., Hadoop, Spark) and algorithms (e.g., MapReduce, Flume).
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role: Data Engineer (Scala)
Must-Have Experience: 5+ yrs overall, 3+ yrs relevant
Must-Have skills: Spark, SQL, Scala, PySpark
Good to have: AWS, EMR, S3, Hadoop, Ctrl-M
Key responsibilities (please specify if the position is an individual one or part of a team):
1) Design strategies and programs to collect, store, analyse and visualize data from various sources.
2) Develop big data solution recommendations and ensure implementation of the chosen big data solution.
3) Ability to program, preferably in different programming/scripting languages such as Scala, Python, Java, Pig or SQL.
4) Proficient knowledge of big data frameworks: Spark, MapReduce.
5) Understanding of Hadoop, Hive, HBase, MongoDB and/or MapReduce.
6) Experience with one of the large cloud-computing infrastructure solutions, such as Amazon Web Services or Elastic MapReduce.
7) Tuning the Spark engine for processing high volumes of data (approx. a billion records) using BDM (a tuning sketch follows this list).
8) Troubleshoot data issues and deep-dive into root cause analysis of any performance issue.
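As a hedged illustration of item 7 (Spark tuning for roughly a billion records), here is a minimal PySpark sketch; the paths, column names, and partition counts are assumptions to be sized against the real cluster, not a definitive configuration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder.appName("tuning-demo")
         .config("spark.sql.shuffle.partitions", "400")  # size shuffles to data volume
         .config("spark.sql.adaptive.enabled", "true")   # AQE, Spark 3.x
         .getOrCreate())

facts = spark.read.parquet("/lake/facts")  # large table (illustrative path)
dims = spark.read.parquet("/lake/dims")    # small lookup table

# Broadcast the small side so the join avoids a full shuffle
joined = facts.join(F.broadcast(dims), "dim_id")

# Repartition by the write key so output files are evenly sized
(joined.repartition("event_date")
       .write.mode("overwrite")
       .partitionBy("event_date")
       .parquet("/lake/curated/facts"))
```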
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Introduction: A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience. Your Role and Responsibilities: As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems. Preferred Education: Master's Degree. Required Technical and Professional Expertise: Spring Boot, Java2/EE, Microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python. Preferred Technical and Professional Experience: None.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Data Engineer (Scala)
Must-Have Experience: 5+ yrs overall, 3+ yrs relevant
Must-Have skills: Spark, SQL, Scala, PySpark
Good to have: AWS, EMR, S3, Hadoop, Ctrl-M
Key responsibilities (please specify if the position is an individual one or part of a team):
1) Design strategies and programs to collect, store, analyse and visualize data from various sources.
2) Develop big data solution recommendations and ensure implementation of the chosen big data solution.
3) Ability to program, preferably in different programming/scripting languages such as Scala, Python, Java, Pig or SQL.
4) Proficient knowledge of big data frameworks: Spark, MapReduce.
5) Understanding of Hadoop, Hive, HBase, MongoDB and/or MapReduce.
6) Experience with one of the large cloud-computing infrastructure solutions, such as Amazon Web Services or Elastic MapReduce.
7) Tuning the Spark engine for processing high volumes of data (approx. a billion records) using BDM.
8) Troubleshoot data issues and deep-dive into root cause analysis of any performance issue.
Posted 1 week ago