Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
4.0 - 9.0 years
12 - 17 Lacs
Bengaluru
Work from Office
About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

About Target Tech: Every time a guest enters a Target store or browses Target.com or the app, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 5,000 engineers, data scientists, architects, and product managers striving to make Target the most convenient, safe, and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation, and continuous learning.

About Team: A role with Target Data Science & Engineering means the chance to help develop and manage state-of-the-art predictive algorithms that use data at scale to automate and optimize decisions. Whether you join our Statistics, Optimization, or Machine Learning teams, you'll be challenged to harness Target's impressive data breadth to build the algorithms that power solutions our partners in Marketing, Supply Chain Optimization, Network Security, and Personalization rely on.

Position Overview: Develop a strong understanding of business and operational processes. Analyse large datasets to derive insights for business process improvements and solution development. Develop optimization-based solutions, mathematical models (probabilistic/deterministic), predictive models, and implement them in real-world production systems with measurable impact.
Add new capabilities and features to the simulation framework to reflect the complexities of an evolving supply chain network. Develop and deploy modules to run simulations for testing and validating multiple scenarios to evaluate the impact of various inventory purchasing and management strategies. Enhance and maintain the simulation environment to enable testing/deploying new features for running custom scenarios. Coordinate the analysis, troubleshooting, and resolution of issues in models and software.

About You: Bachelor's/MS/PhD in Mathematics, Statistics, Operations Research, Industrial Engineering, Physics, Computer Science, or related fields. 4+ years of relevant experience. Experience applying Operations Research to solve complex problems in supply chain or a related domain. Strong analytical thinking and data visualization skills. Ability to creatively solve business problems and innovate new approaches. Experience cleaning, transforming, and analysing large datasets for insights leading to business improvements. Proficiency in Python, SQL, Hadoop/Hive, Spark. Strong experience in JVM-based languages like Java or Kotlin preferred. Knowledge of mathematical and statistical concepts, optimization, data structures, algorithms, data analysis, simulations, and visualizations applied to business problems. Good working knowledge of machine learning, probability estimation methods, linear programming (integer, real, and mixed integer), stochastic processes, and their applications.

Know More About Us Here: Life at Target - https://india.target.com/ Benefits - https://india.target.com/life-at-target/workplace/benefits Follow us on social media: https://www.linkedin.com/company/target/ Target Tech - https://tech.target.com/
Posted 1 week ago
2.0 - 3.0 years
7 - 11 Lacs
Bengaluru
Work from Office
About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from diverse backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values diverse backgrounds. We believe your unique perspective is important, and you'll build relationships by being authentic and respectful. At Target, inclusion is a core value. We aim to create equitable experiences for all, regardless of their dimensions of difference. As an equal opportunity employer, Target provides diverse opportunities for everyone to grow and win.

Behind one of the world's best-loved brands is a uniquely capable and brilliant team of data scientists, engineers, and analysts. The Target Data & Analytics team creates the tools and data products to sustainably educate and enable our business partners to make great data-based decisions at Target. We help develop the technology that personalizes the guest experience, from product recommendations to relevant ad content. We're also the source of the data and analytics behind Target's Internet of Things (IoT) applications, fraud detection, supply chain optimization, and demand forecasting.
We play a key role in identifying the test-and-measure or A/B test opportunities that continuously help Target improve the guest experience, whether guests love to shop in stores or at Target.com.

About this role: This role is for someone passionate about data, analysis, metrics development, and feature experimentation, and their application to improving both business strategies and support for the GSCL operations team. Develop, model, and apply analytical best practices while upskilling and coaching others on new and emerging technologies; raise the bar for performance in analysis by sharing well-documented analytical solutions with others (clients, peers, etc.). Drive a continuous-improvement mindset by seeking out new ways to solve problems through formal trainings, peer interactions, and industry publications, to continually improve technically, implement best practices, and sharpen analytical acumen. Be an expert in a specific business domain; be self-directed and drive execution towards outcomes; understand business interdependencies; conduct detailed problem solving; remediate obstacles; use independent judgement and decision-making to deliver per the product scope; provide inputs to establish product/project timelines. Participate in learning forums, or be a buddy, to help increase awareness and adoption of current technical topics relevant to the analytics competency, e.g.
Tools (R, Python); exploratory and descriptive techniques (basic statistics and modelling). Champion participation in internal meetups and hackathons; present at internal conferences relevant to the analytics competency. Contribute to the evaluation and design of relevant technical guides and tools to hire great talent by partnering with talent acquisition. Participate in Agile ceremonies to keep the team up to date on task progress, as needed. Develop and analyse data reports/dashboards/pipelines; perform RCA and troubleshooting of issues using exploratory and systemic techniques.

About you: B.E/B.Tech (2-3 years of relevant experience), M.Tech, M.Sc., or MCA (2+ years of relevant experience). Candidates with strong domain knowledge and relevant experience in supply chain / retail analytics are highly preferred. Strong data understanding: inference of patterns, root cause, statistical analysis, forecasting/predictive modelling, etc. Advanced SQL experience writing complex queries. Hands-on experience with analytics tools: Hadoop, Hive, Spark, Python, R, Domo, and/or equivalent technologies. Experience working with product teams and business leaders to develop product roadmaps and feature development. Able to support conclusions with analytical evidence using descriptive stats, inferential stats, and data visualizations. Strong analytical, problem-solving, and conceptual skills. Demonstrated ability to work with ambiguous problem definitions, recognize dependencies, and deliver impactful solutions through logical problem solving and technical ideation. Excellent communication skills, with the ability to speak to both business and technical teams and translate ideas between them. Intellectually curious, high energy, and a strong work ethic. Comfort with ambiguity and open-ended problems in support of supply chain operations.

Useful Links: Life at Target - https://india.target.com/ Benefits - https://india.target.com/life-at-target/workplace/benefits
Posted 1 week ago
3.0 - 4.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Target is an iconic brand, a Fortune 50 company, and one of America's leading retailers. Global Supply Chain and Logistics at Target is evolving at an incredible pace. We are constantly reimagining how we get the right product to guests faster, better, and more cost-effectively across 1,900 locations. A role with the Data Science team means building scalable data science products to support the ever-changing supply chain landscape. This involves making smarter, automated, and algorithmic decisions: evaluating product flow from vendor to distribution center and stores across the first mile, middle mile, and last mile, with a focus on inventory modeling and replenishment, to improve operating efficiencies both within the distribution center and across the network. We are looking for exceptional people who are proactive, creative, independent, innovative, and comfortable working in varying degrees of ambiguity. Are you a creative problem-solver who seeks root causes, simplifies problems, quickly identifies solutions, commits to a plan, and positively influences others to execute it? If so, you will thrive on this dynamic team.

About the Role: Our Supply Chain Data Science team oversees the development of state-of-the-art mathematical techniques to solve key problems for Target's Supply Chain, such as identifying optimal inventory quantities and positioning across multiple channels and locations, planning the right mix of inventory investments versus guest experience, digital order fulfillment planning, transportation resource planning, and more. As a Senior Data Scientist, you will collaborate with Product, Tech, and business partners to solve retail challenges at scale for our supply chain network. You will design, develop, deploy, and maintain data science models and tools. You'll work closely with applied data scientists, data analysts, and business partners to continuously learn and address evolving business needs.
You'll also collaborate with engineers and data scientists on peer teams to build and productionize supply chain solutions.

Responsibilities: Develop a strong understanding of business and operational processes within Target's Supply Chain. Gain an in-depth understanding of systems and processes influencing supply chain efficiency. Analyze large datasets to derive insights for business process improvements and solution development. Develop optimization-based solutions, mathematical models (probabilistic/deterministic), predictive models, and implement them in real-world production systems with measurable impact. Collaborate with the team to build and maintain complex software systems and tools. Add new capabilities and features to the simulation framework to reflect the complexities of an evolving supply chain network. Develop and deploy modules to run simulations for testing and validating multiple scenarios to evaluate the impact of various inventory purchasing and management strategies. Enhance and maintain the simulation environment to enable testing/deploying new features for running custom scenarios. Coordinate the analysis, troubleshooting, and resolution of issues in models and software.

About You: Bachelor's/MS/PhD in Mathematics, Statistics, Operations Research, Industrial Engineering, Physics, Computer Science, or related fields. 3-4+ years of relevant experience. Experience applying Operations Research to solve complex problems in supply chain or a related domain. Strong analytical thinking and data visualization skills. Ability to creatively solve business problems and innovate new approaches. Experience cleaning, transforming, and analyzing large datasets for insights leading to business improvements. Supply chain data experience preferred but not required. Experience developing, testing, and maintaining large codebases in a collaborative environment while meeting industry best practices. Proficiency in Python, SQL, Hadoop/Hive, Spark.
Strong experience in JVM-based languages like Java or Kotlin preferred. Knowledge of mathematical and statistical concepts, optimization, data structures, algorithms, data analysis, simulations, and visualizations applied to business problems. Good working knowledge of machine learning, probability estimation methods, linear programming (integer, real, and mixed integer), stochastic processes, and their applications. Experience deploying solutions with large-scale business impact. Self-driven and results-oriented, with the ability to meet tight timelines. Strong team player with the ability to collaborate across geographies/time zones. Excellent written and verbal communication skills.

Why Work with Us at Target: Work directly on data science problems with the most impact on Target's entire supply chain, forecasting, and merchandising teams. We love open source! Many of our team members contribute to open-source communities during work hours, and we try to contribute back where we can. We value diversity and inclusion in our teams and believe it's key to creating positive in-store experiences for our guests. We offer flexibility in team members' schedules and work arrangements, enabling them to flourish both inside and outside of work.
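The inventory-strategy simulations this posting describes can be sketched, very roughly, as a toy Monte Carlo model. Everything here is an illustrative assumption (a simple base-stock policy, uniform demand, instant replenishment), not Target's actual methodology:

```python
import random

def simulate_base_stock(base_stock, demand_mean, days, seed=42):
    """Toy inventory simulation: each morning stock is replenished up to
    base_stock, then random daily demand is served from on-hand stock.
    Returns (fill_rate, avg_leftover) for comparing purchasing strategies."""
    random.seed(seed)
    served = demanded = leftover_total = 0
    for _ in range(days):
        on_hand = base_stock                          # instant replenishment (simplifying assumption)
        demand = random.randint(0, 2 * demand_mean)   # crude uniform demand model
        sold = min(demand, on_hand)
        served += sold
        demanded += demand
        leftover_total += on_hand - sold              # unsold units would incur holding cost
    return served / demanded, leftover_total / days

# Compare two hypothetical purchasing strategies on the same demand stream.
for level in (5, 15):
    fill, leftover = simulate_base_stock(level, demand_mean=10, days=1000)
    print(f"base stock {level}: fill rate {fill:.2f}, avg leftover {leftover:.1f}")
```

A real supply chain simulator would model lead times, multi-echelon networks, and correlated demand, but the fill-rate-versus-holding trade-off it evaluates is the same one this toy exposes.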
Posted 1 week ago
4.0 - 9.0 years
12 - 17 Lacs
Bengaluru
Work from Office
About Us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Pyramid Overview: As a Sr Engineer in Data Sciences, you will play a crucial role in designing, implementing, and optimizing AI solutions in production. Additionally, you'll apply best practices in software design, participate in code reviews, and create a maintainable, well-tested codebase with relevant documentation. We will look to you to understand and actively follow foundational programming principles (best practices, unit testing, code organization, the basics of CI/CD, etc.). You will get the opportunity to develop in one or more approved programming languages (Java, Scala, Python, R), and learn and adhere to best practices in data analysis and data understanding.

Team Overview: The Competitive Intelligence Data Sciences team at Target builds data science models that leverage competitive information for various decisioning needs. The team plays a crucial role in helping Target stay competitive across various business functions.

Position Overview: For this specific role, you will be responsible for deploying and maintaining language models in partnership with the cross-functional technology team that spans Product engineering, Data engineering, and Data analytics. You will need to diagnose model performance, review summary statistics, and iteratively improve the quality of the downstream decisions.
About You: 4-year degree in a quantitative discipline (Science, Technology, Engineering, Mathematics) or equivalent experience; MS in Computer Science, Applied Mathematics, Statistics, Physics, or equivalent work or industry experience. 4+ years of experience in end-to-end application development, data exploration, data pipelining, API design, and optimization of model latency. Strong expertise in working with text data and embeddings, building and deploying NLP solutions, and integrating with LLM services. Expertise in MLOps frameworks and hands-on experience with MLOps tools. 2+ years of experience building and deploying AI/ML algorithms into production environments, including model and system monitoring and troubleshooting. Highly proficient in programming with Spark (Python and/or Scala). Good understanding of Big Data and distributed architecture, specifically Hadoop, Hive, Spark, Docker, Kubernetes, and Kafka. Excellent communication skills with the ability to clearly tell data-driven stories through appropriate visualizations, graphs, and narratives. Self-driven and results-oriented; able to meet tight timelines. Strong team player with the ability to collaborate effectively across a global team. Understanding of the retail industry is an added advantage.

Bonus Points: Experience with deep learning frameworks: TensorFlow, PyTorch, or Keras. Experience developing highly distributed AI/ML systems at scale. Experience with Vertex AI. Experience mentoring junior team members' ML skillsets and career development. Experience handling streaming data and setting up real-time services.

Know More about Us here: Life at Target - https://india.target.com/ Benefits - https://india.target.com/life-at-target/workplace/benefits https://india.target.com/life-at-target/belonging
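As a minimal illustration of the embedding-based text matching this role works with: a pure-Python sketch of nearest-neighbour lookup by cosine similarity. The vectors and product titles are made up; a real system would use an embedding model and a vector index rather than hand-written 4-dimensional vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical 4-dim embeddings of competitor product titles.
query = [0.2, 0.8, 0.1, 0.0]
catalog = {
    "red running shoe": [0.21, 0.79, 0.09, 0.02],
    "blue coffee mug": [0.9, 0.1, 0.0, 0.3],
}
# Pick the catalog item whose embedding is closest to the query.
best = max(catalog, key=lambda k: cosine_similarity(query, catalog[k]))
print(best)  # the nearest-neighbour match
```

Production NLP pipelines replace the dictionary scan with an approximate-nearest-neighbour index, but the similarity computation is the same.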
Posted 1 week ago
5.0 - 8.0 years
2 - 5 Lacs
Bengaluru
Work from Office
Job Information: Job Opening ID: ZR_2335_JOB; Date Opened: 01/08/2024; Industry: IT Services; Work Experience: 5-8 years; Job Title: Snowflake Developer; City: Bangalore South; Province: Karnataka; Country: India; Postal Code: 560066; Number of Positions: 1; Contract duration: 6 months.

Requirements: 5+ years of experience. Location: WFH (should have a good internet connection). Snowflake knowledge (must have). Autonomous person. SQL knowledge (must have). Data modeling (must have). Data warehouse concepts and DW design best practices (must have). SAP knowledge (good to have). SAP functional knowledge (good to have). Informatica IDMC (good to have). Good communication skills, team player, self-motivated, and strong work ethic. Flexibility in working hours: 12pm Central time (overlap with the US team). Confidence, proactiveness, and the ability to demonstrate alternatives to mitigate tools/expertise gaps (fast learner).
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project description: You will be working in a global team that manages and performs global technical controls. You'll be joining the Asset Management team, which looks after the asset management data foundation and operates a set of in-house developed tooling. As an IT engineer, you'll play an important role in ensuring the development methodology is followed and lead technical design discussions with the architects. Our culture centers around partnership with our businesses, transparency, accountability, empowerment, and passion for the future.

Responsibilities: Design, develop, and maintain scalable data solutions using Starburst. Collaborate with cross-functional teams to integrate Starburst with existing data sources and tools. Optimize query performance and ensure data security and compliance. Implement monitoring and alerting systems for data platform health. Stay updated with the latest developments in data engineering and analytics.

Mandatory Skills: Bachelor's or Master's degree in a related technical field, or equivalent professional experience. Prior experience as a Software Engineer applying engineering principles to improve existing systems, including leading complex, well-defined projects. Qubole or Trino/Starburst. Strong knowledge of big data languages and tools, including SQL, Hive, Spark/PySpark, Presto, and Python. Good knowledge of and experience with cloud platforms such as AWS, GCP, or Azure.

Nice-to-Have Skills: Data Architecture & Engineering: design and implement efficient, scalable data warehousing solutions using Azure Databricks and Microsoft Fabric. Business Intelligence & Data Visualization: create insightful Power BI dashboards to help drive business decisions.
Posted 1 week ago
10.0 - 12.0 years
6 - 9 Lacs
Bengaluru
Work from Office
Job Information: Job Opening ID: ZR_1662_JOB; Date Opened: 17/12/2022; Industry: Technology; Work Experience: 10-12 years; Job Title: Pyspark Lead; City: Bangalore; Province: Karnataka; Country: India; Postal Code: 560002; Number of Positions: 4. Location: Bangalore, Chennai.

Responsibilities: Extract data from source systems using Data Factory pipelines. Massage and cleanse the data. Transform data based on business rules. Expose the data for reporting needs and exchange data with downstream applications. Standardize the various integration flows (e.g., decom ALDML Init integration, simplify ALDML Delta integration).
Posted 1 week ago
6.0 - 10.0 years
2 - 5 Lacs
Chennai
Work from Office
Job Information: Job Opening ID: ZR_1999_JOB; Date Opened: 17/06/2023; Industry: Technology; Work Experience: 6-10 years; Job Title: ETL Tester; City: Chennai; Province: Tamil Nadu; Country: India; Postal Code: 600001; Number of Positions: 1.

Responsibilities: Create test case documents/plans for testing the data pipelines. Check the mappings for fields that support data staging and data marts, and the data type constraints of the fields present in Snowflake. Verify non-null fields are populated. Verify business requirements and confirm the correct logic is implemented in the transformation layer of the ETL process. Verify stored procedure calculations and data mappings. Verify data transformations are correct based on the business rules. Verify successful execution of data loading workflows.
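The verification steps in this posting can be sketched as a small automated check. Here sqlite3 stands in for the actual warehouse (the posting mentions Snowflake), and the table and column names are hypothetical:

```python
import sqlite3

# Stand-in warehouse: in a real setup this would be a Snowflake connection.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE stg_orders (order_id INTEGER, amount REAL, region TEXT)")
cur.executemany("INSERT INTO stg_orders VALUES (?, ?, ?)",
                [(1, 10.0, "APAC"), (2, 20.5, "EMEA"), (3, 5.0, None)])

def count_nulls(cursor, table, column):
    """Return how many rows have NULL in a column expected to be non-null."""
    cursor.execute(f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL")
    return cursor.fetchone()[0]

# Non-null checks: order_id and amount must always be populated.
assert count_nulls(cur, "stg_orders", "order_id") == 0
assert count_nulls(cur, "stg_orders", "amount") == 0

# Transformation-rule check: the aggregated total must reconcile.
cur.execute("SELECT SUM(amount) FROM stg_orders")
assert cur.fetchone()[0] == 35.5
print("all checks passed")
```

In practice these assertions would live in a test framework and run after each load, but the shape of the check (query, compare, fail loudly) is the same.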
Posted 1 week ago
8.0 - 12.0 years
4 - 8 Lacs
Pune
Work from Office
Job Information: Job Opening ID: ZR_1581_JOB; Date Opened: 25/11/2022; Industry: Technology; Work Experience: 8-12 years; Job Title: Senior Specialist - Data Engineer; City: Pune; Province: Maharashtra; Country: India; Postal Code: 411001; Number of Positions: 4. Location: Pune / Mumbai / Bangalore / Chennai.

Roles & Responsibilities: Total 8-10 years of working experience. 8-10 years of experience with big data tools like Spark, Kafka, Hadoop, etc. Design and deliver consumer-centric, high-performance systems. You would be dealing with huge volumes of data sets arriving through batch and streaming platforms. You will be responsible for building and delivering data pipelines that process, transform, integrate, and enrich data to meet various demands from the business. Mentor the team on infrastructure, networking, data migration, monitoring, and troubleshooting aspects. Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps, etc. Design, build, test, and deploy streaming pipelines for data processing in real time and at scale. Experience with stream-processing systems like Storm, Spark Streaming, Flink, etc. Experience with object-oriented/object function scripting languages: Scala, Java, etc. Develop software systems using test-driven development, employing CI/CD practices. Partner with other engineers and team members to develop software that meets business needs. Follow Agile methodology for software development and technical documentation. Good to have banking/finance domain knowledge. Strong written and oral communication, presentation, and interpersonal skills.
Exceptional analytical, conceptual, and problem-solving abilities. Able to prioritize and execute tasks in a high-pressure environment. Experience working in a team-oriented, collaborative environment. 8-10 years of hands-on coding experience. Proficient in Java, with good knowledge of its ecosystems. Experience writing Spark code using Scala. Experience with big data tools like Sqoop, Hive, Pig, Hue. Solid understanding of object-oriented programming and HDFS concepts. Familiar with various design and architectural patterns. Experience with big data tools: Hadoop, Spark, Kafka, Flink, Hive, Sqoop, etc. Experience with relational SQL and NoSQL databases like MySQL, PostgreSQL, MongoDB, and Cassandra. Experience with data pipeline tools like Airflow. Experience with AWS cloud services (EC2, S3, EMR, RDS, Redshift) and BigQuery. Experience with stream-processing systems: Storm, Spark Streaming, Flink, etc. Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc. Expertise in designing/developing platform components like caching, messaging, event processing, automation, transformation, and tooling frameworks.
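To illustrate the stream-processing idea this posting references (Storm, Spark Streaming, Flink), here is a plain-Python sketch of a tumbling-window aggregation, the basic building block those systems provide. This is not Spark code, and the events and window size are invented:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed, non-overlapping windows
    and count occurrences per key — the core of a streaming aggregation."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # floor to the window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical click events: (epoch_seconds, page)
events = [(0, "home"), (3, "cart"), (7, "home"), (12, "home"), (14, "cart")]
print(tumbling_window_counts(events, 10))
# 10-second windows: [0,10) has home=2, cart=1; [10,20) has home=1, cart=1
```

Real engines add what this sketch omits: unbounded input, event-time watermarks for late data, and fault-tolerant state, but the windowed-aggregation logic is the same.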
Posted 1 week ago
6.0 - 10.0 years
3 - 8 Lacs
Pune
Work from Office
Job Information: Job Opening ID: ZR_1671_JOB; Date Opened: 20/12/2022; Industry: Technology; Work Experience: 6-10 years; Job Title: Oracle Warehouse Builder/Developer; City: Pune; Province: Maharashtra; Country: India; Postal Code: 411001; Number of Positions: 4.

Roles & Responsibilities: Oracle Warehouse Builder (OWB), Oracle Workflow Builder, Oracle TBSS. Tool versions: Oracle Warehouse Builder 9i (Client Version 9.0.2.62.3 / Repository Version 9.0.2.0.0), Oracle Warehouse Builder 4, Oracle Workflow Builder 2.6.2, Oracle Database 10g (TNS for IBM/AIX RISC System/6000, Version 10.2.0.5.0 - Production). More than 5 years of experience with Oracle Warehouse Builder (OWB) and Oracle Workflow Builder. Expert knowledge of Oracle PL/SQL to develop everything from individual code objects to entire data marts. Scheduling tools: Oracle TBSS (creating and running DBMS_SCHEDULER jobs) and trigger-based scheduling for file sources based on control files. Must have design and development experience in data pipeline solutions from different source systems (files, Oracle) to data lakes. Must have been involved in creating/designing Hive tables and loading and analyzing data using Hive queries. Must have knowledge of CA Workload Automation DE 12.2 to create and schedule jobs. Extensive knowledge of the entire life cycle of Change/Incident/Problem management using ServiceNow. Oracle Enterprise Manager 10gR1 (monitoring jobs and tablespace utilization). Extensive knowledge of fetching mainframe COBOL files (ASCII and EBCDIC formats) to the landing area, then processing (formatting) and loading (with error handling) these files into Oracle tables using SQL*Loader and external tables. Extensive knowledge of Oracle Forms 6 integration with OWB 4.
Work closely with the business owner teams and functional/data analysts in the entire development/BAU process. Work closely with AIX support and DBA support teams for access privileges, storage issues, etc. Work closely with the Batch Operations team and MFT teams for file transfer issues.

Migration of Oracle to the Hadoop ecosystem: Must have working experience with Hadoop ecosystem elements like HDFS, MapReduce, YARN, etc. Must have working knowledge of Scala and Spark DataFrames to convert the existing code to Hadoop data lakes. Must have design and development experience in data pipeline solutions from different source systems (files, Oracle) to data lakes. Must have been involved in creating/designing Hive tables and loading and analyzing data using Hive queries. Must have knowledge of creating Hive partitions, dynamic partitions, and buckets. Must have knowledge of CA Workload Automation DE 12.2 to create and schedule jobs. Use Denodo for data virtualization to provide the required data access for end users.
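Hive dynamic partitioning, mentioned in the requirements above, routes each row into a directory derived from its partition-column values. A plain-Python sketch of just that layout logic follows; the table and columns are hypothetical, and in real Hive this happens via INSERT ... PARTITION with dynamic partitioning enabled, not application code:

```python
from collections import defaultdict

def dynamic_partition_layout(rows, partition_cols):
    """Mimic Hive dynamic partitioning: bucket rows into HDFS-style
    directories named col=value/... based on each row's own values."""
    layout = defaultdict(list)
    for row in rows:
        # Partition path is built from the row's partition-column values.
        path = "/".join(f"{c}={row[c]}" for c in partition_cols)
        # Partition columns are stored in the path, not in the data files.
        data = {k: v for k, v in row.items() if k not in partition_cols}
        layout[path].append(data)
    return dict(layout)

rows = [
    {"order_id": 1, "dt": "2022-12-20", "country": "IN"},
    {"order_id": 2, "dt": "2022-12-20", "country": "US"},
    {"order_id": 3, "dt": "2022-12-21", "country": "IN"},
]
for path, data in dynamic_partition_layout(rows, ["dt", "country"]).items():
    print(path, data)
```

This is why queries that filter on partition columns (here dt and country) can skip entire directories: the values live in the path itself.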
Posted 1 week ago
12.0 - 15.0 years
13 - 17 Lacs
Mumbai
Work from Office
Job Information: Job Opening ID: ZR_1688_JOB; Date Opened: 24/12/2022; Industry: Technology; Work Experience: 12-15 years; Job Title: Big Data Architect; City: Mumbai; Province: Maharashtra; Country: India; Postal Code: 400008; Number of Positions: 4. Location: Mumbai, Pune, Chennai, Hyderabad, Coimbatore, Kolkata.

12+ years of experience in the Big Data space across architecture, design, development, testing, and deployment, with a full understanding of the SDLC. 1. Experience with Hadoop and its related technology stack. 2. Experience with the Hadoop ecosystem (HDP+CDP) / Big Data (especially Hive); hands-on experience with programming languages such as Java/Scala/Python; hands-on experience/knowledge of Spark. 3. Being responsible for, and focused on, the uptime and reliable running of all ingestion/ETL jobs. 4. Good SQL and experience working in a Unix/Linux environment is a must. 5. Create and maintain optimal data pipeline architecture. 6. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. 7. Good to have cloud experience. 8. Good to have experience with Hadoop integration with data visualization tools like Power BI.
Posted 1 week ago
6.0 - 10.0 years
3 - 7 Lacs
Chennai
Work from Office
Job Information: Job Opening ID: ZR_2199_JOB; Date Opened: 15/04/2024; Industry: Technology; Work Experience: 6-10 years; Job Title: Sr Data Engineer; City: Chennai; Province: Tamil Nadu; Country: India; Postal Code: 600004; Number of Positions: 4.

Requirements: Strong experience in Python. Good experience in Databricks. Experience working on the AWS/Azure cloud platforms. Experience working with REST APIs and services, and with messaging and event technologies. Experience with ETL or building data pipeline tools. Experience with streaming platforms such as Kafka. Demonstrated experience working with large and complex data sets. Ability to document data pipeline architecture and design. Experience with Airflow is nice to have. Able to build complex Delta Lake solutions.
Posted 1 week ago
3.0 - 5.0 years
7 - 11 Lacs
Mumbai
Work from Office
Job Information
Job Opening ID: ZR_1843_JOB | Date Opened: 05/04/2023 | Industry: Technology | Work Experience: 3-5 years | Job Title: Collibra Data Governance SME | City: Mumbai | Province: Maharashtra | Country: India | Postal Code: 400708 | Number of Positions: 2
- 3+ years of hands-on experience with the Collibra tool
- Knowledge of Collibra DGC version 5.7 and onward
- Experience with Spring Boot development
- Experience with Groovy and Flowable for BPMN workflow development
- Experience with both business and technical metadata
- Experience with platform activities such as job server setup and upgrade
Responsibilities:
- Work as an SME in data governance, metadata management, and data catalog solutions, specifically Collibra Data Governance; client-facing and consulting skills required
- Experience in data governance across a wide variety of data types (structured, semi-structured, and unstructured) and data sources (HDFS, S3, Kafka, Cassandra, Hive, HBase, Elasticsearch)
- Partner with data stewards on requirements, integrations, and processes; participate in meetings and working sessions
- Partner with data management and integration leads to improve data management technologies and processes
- Working experience with the Collibra operating model, BPMN workflow development, and integrating various applications or systems with Collibra
- Experience setting up people's roles, responsibilities and controls, data ownership, workflows, and common processes
- Integrate Collibra with other enterprise tools: data quality, data catalog, and master data management solutions
- Develop and configure all customized Collibra workflows
- Develop APIs (REST, SOAP) to expose metadata functionality to end users
Location: Pan India
Posted 1 week ago
4.0 - 6.0 years
7 - 11 Lacs
Mumbai
Work from Office
Job Information
Job Opening ID: ZR_1760_JOB | Date Opened: 21/03/2023 | Industry: Technology | Work Experience: 4-6 years | Job Title: Collibra Data Governance SME | City: Mumbai | Province: Maharashtra | Country: India | Postal Code: 400708 | Number of Positions: 1
- Work as an SME in data governance, metadata management, and data catalog solutions, specifically Collibra Data Governance; client-facing and consulting skills required
- Experience in data governance across a wide variety of data types (structured, semi-structured, and unstructured) and data sources (HDFS, S3, Kafka, Cassandra, Hive, HBase, Elasticsearch)
- Partner with data stewards on requirements, integrations, and processes; participate in meetings and working sessions
- Partner with data management and integration leads to improve data management technologies and processes
- Working experience with the Collibra operating model, BPMN workflow development, and integrating various applications or systems with Collibra
- Experience setting up people's roles, responsibilities and controls, data ownership, workflows, and common processes
- Integrate Collibra with other enterprise tools: data quality, data catalog, and master data management solutions
- Develop and configure all customized Collibra workflows
- Develop APIs (REST, SOAP) to expose metadata functionality to end users
Location: Pan India
Posted 1 week ago
6.0 - 10.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Information
Job Opening ID: ZR_2470_JOB | Date Opened: 03/05/2025 | Industry: IT Services | Work Experience: 6-10 years | Job Title: Sr. Data Engineer | City: Bangalore South | Province: Karnataka | Country: India | Postal Code: 560050 | Number of Positions: 1
We're looking for an experienced Senior Data Engineer to lead the design and development of scalable data solutions at our company. The ideal candidate will have extensive hands-on experience in data warehousing, ETL/ELT architecture, and cloud platforms like AWS, Azure, or GCP. You will work closely with both technical and business teams, mentoring engineers while driving data quality, security, and performance optimization.
Responsibilities:
- Lead the design of data warehouses, lakes, and ETL workflows
- Collaborate with teams to gather requirements and build scalable solutions
- Ensure data governance, security, and optimal performance of systems
- Mentor junior engineers and drive end-to-end project delivery
Requirements:
- 6+ years of experience in data engineering, including at least 2 full-cycle data warehouse projects
- Strong skills in SQL, ETL tools (e.g., Pentaho, dbt), and cloud platforms
- Expertise in big data tools (e.g., Apache Spark, Kafka)
- Excellent communication skills and leadership abilities
Preferred: Experience with workflow orchestration tools (e.g., Airflow), real-time data, and DataOps practices
Posted 1 week ago
5.0 - 8.0 years
2 - 5 Lacs
Chennai
Work from Office
Job Information
Job Opening ID: ZR_2168_JOB | Date Opened: 10/04/2024 | Industry: Technology | Work Experience: 5-8 years | Job Title: AWS Data Engineer | City: Chennai | Province: Tamil Nadu | Country: India | Postal Code: 600002 | Number of Positions: 4
Mandatory skills: AWS, Python, SQL, Spark, Airflow, Snowflake
Responsibilities:
- Create and manage cloud resources in AWS
- Ingest data from different sources that expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems
- Implement data ingestion and processing with the help of big data technologies
- Process and transform data using technologies such as Spark and cloud services
- Understand your part of the business logic and implement it using the language supported by the base data platform
- Develop automated data quality checks to make sure the right data enters the platform, and verify the results of calculations
- Develop infrastructure to collect, transform, combine, and publish/distribute customer data
- Define process improvement opportunities to optimize data collection, insights, and displays
- Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible
- Identify and interpret trends and patterns in complex data sets
- Construct a framework that uses data visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders
- Participate in regular Scrum ceremonies with the agile teams
- Proficient at developing queries, writing reports, and presenting findings
- Mentor junior members and bring best industry practices
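The "automated data quality checks" this kind of role calls for can be sketched in a few lines. The sketch below is not from the posting; the rule set, record shape, and field names (`order_id`, `amount`) are hypothetical, and real pipelines would typically use a framework rather than hand-rolled checks.

```python
# Minimal, generic sketch of an automated data quality check of the kind
# job postings like the one above describe. The rules and field names
# here are hypothetical; real pipelines usually use a DQ framework.

def check_quality(records, required_fields, unique_key):
    """Return a list of (row_index, issue) pairs found in dict records."""
    issues = []
    seen_keys = set()
    for i, rec in enumerate(records):
        # Completeness: every required field must be present and non-null.
        for field in required_fields:
            if rec.get(field) is None:
                issues.append((i, f"missing value for '{field}'"))
        # Uniqueness: the key column must not repeat across rows.
        key = rec.get(unique_key)
        if key in seen_keys:
            issues.append((i, f"duplicate key {key!r}"))
        seen_keys.add(key)
    return issues

rows = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 1, "amount": None},  # duplicate key and missing amount
]
problems = check_quality(rows, required_fields=["order_id", "amount"],
                         unique_key="order_id")
```

A gate like this would run right after ingestion, rejecting or quarantining batches whose `problems` list is non-empty before they reach downstream transformations.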
Posted 1 week ago
5.0 - 8.0 years
2 - 6 Lacs
Chennai
Work from Office
Job Information
Job Opening ID: ZR_1668_JOB | Date Opened: 19/12/2022 | Industry: Technology | Work Experience: 5-8 years | Job Title: Sr. AWS Developer | City: Chennai | Province: Tamil Nadu | Country: India | Postal Code: 600001 | Number of Positions: 4
Tech stack: AWS Lambda, Glue, Kafka/Kinesis, RDBMS (Oracle, MySQL, Redshift, PostgreSQL, Snowflake), API Gateway, CloudFormation/Terraform, Step Functions, CloudWatch, Python, PySpark
Job role and responsibilities:
Looking for a Software Engineer/Senior Software Engineer with hands-on experience in ETL projects and extensive knowledge of building data processing systems with Python, PySpark, and cloud technologies (AWS).
- Experience developing on the AWS cloud (S3, Redshift, Aurora, Glue, Lambda, Hive, Kinesis, Spark, Hadoop/EMR)
Required skills:
- Amazon Kinesis, Amazon Aurora, data warehousing, SQL, AWS Lambda, Spark, AWS QuickSight
- Advanced Python skills
- Data engineering, ETL, and ELT skills
- Experience with cloud platforms (AWS, GCP, or Azure)
Mandatory skills: data warehousing, ETL, SQL, Python, AWS Lambda, Glue, AWS Redshift
Posted 1 week ago
5.0 - 8.0 years
5 - 9 Lacs
Mumbai
Work from Office
Job Information
Job Opening ID: ZR_1624_JOB | Date Opened: 08/12/2022 | Industry: Technology | Work Experience: 5-8 years | Job Title: Azure ADF & Power BI Developer | City: Mumbai | Province: Maharashtra | Country: India | Postal Code: 400001 | Number of Positions: 4
Roles and responsibilities:
- 5+ years of hands-on experience in Azure cloud development (ADF + Databricks) - mandatory
- Strong in Azure SQL; knowledge of Synapse/Analytics is good to have
- Experience working on agile projects and familiarity with Scrum/SAFe ceremonies
- Good written and verbal communication skills; can work directly with the customer
- Ready to work in the 2nd shift; flexible
- Defines, designs, develops, and tests software components/applications using Microsoft Azure: Databricks, ADF, ADL, Hive, Python, Spark SQL, PySpark
- Expertise in Azure Databricks, ADF, ADL, Hive, Python, Spark, PySpark
- Strong T-SQL skills with experience in Azure SQL DW
- Experience handling structured and unstructured datasets
- Experience in data modeling and advanced SQL techniques
- Experience implementing Azure Data Factory pipelines using the latest technologies and techniques
- Good exposure to application development
- Should work independently with minimal supervision
Posted 1 week ago
4.0 - 6.0 years
2 - 5 Lacs
Chennai
Work from Office
Job Information
Job Opening ID: ZR_1941_JOB | Date Opened: 08/05/2023 | Industry: Technology | Work Experience: 4-6 years | Job Title: Denodo Admin | City: Chennai | Province: Tamil Nadu | Country: India | Postal Code: 600089 | Number of Positions: 5
Must have:
- Strong work experience as a Denodo administrator
- Experience with T-SQL programming
- Experience working with Denodo v6 and v8
- Strong knowledge of the ITIL process
- Excellent analytical, debugging, communication, and reporting skills
Good to have:
- Knowledge of clustering, load balancing, and platform upgrades
- Experience with Control-M, ServiceNow, and Azure DevOps
Domain: Banking
Roles and responsibilities:
- Role: Denodo Administrator
- Responsible for monitoring, cache and DB refresh, DB connection changes, access provisioning, and proactive prevention of disk space issues
- Responsible for automation of memory and SQL alerts
- Excellent communication skills; works directly with the customer
- Needs to work in shifts to provide support
Posted 1 week ago
5.0 - 8.0 years
2 - 6 Lacs
Mumbai
Work from Office
Job Information
Job Opening ID: ZR_1963_JOB | Date Opened: 17/05/2023 | Industry: Technology | Work Experience: 5-8 years | Job Title: Neo4j GraphDB Developer | City: Mumbai | Province: Maharashtra | Country: India | Postal Code: 400001 | Number of Positions: 5
A graph data engineer is required for a complex supply chain project.
Key required skills:
- Graph data modelling: experience with graph data models (LPG, RDF) and graph query language (Cypher); exposure to various graph data modelling techniques
- Experience with Neo4j Aura and optimizing complex queries
- Experience with GCP stacks such as BigQuery, GCS, and Dataproc
- Experience in PySpark and Spark SQL is desirable
- Experience exposing graph data to visualisation tools such as NeoDash, Tableau, and Power BI
The expertise you have:
- Bachelor's or Master's degree in a technology-related field (e.g., Engineering, Computer Science)
- Demonstrable experience implementing data solutions in the graph database space
- Hands-on experience with graph databases (Neo4j preferred, or any other); experience tuning graph databases
- Understanding of graph data model paradigms (LPG, RDF) and graph query language; hands-on experience with Cypher is required
- Solid understanding of graph data modelling, graph schema development, and graph data design
- Relational database experience; hands-on SQL experience is required
Desirable (optional) skills: data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), API and in-memory technologies; understanding of developing highly scalable distributed systems using open-source technologies; experience with supply chain data is desirable but not essential
Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad
Posted 1 week ago
5.0 - 8.0 years
2 - 6 Lacs
Pune
Work from Office
Job Information
Job Opening ID: ZR_2098_JOB | Date Opened: 13/01/2024 | Industry: Technology | Job Type: Contract | Work Experience: 5-8 years | Job Title: DCT Data Engineer | City: Pune City | Province: Maharashtra | Country: India | Postal Code: 411001 | Number of Positions: 4
Locations: Pune, Bangalore, Indore
Work mode: Work from Office
Skills:
- Informatica Data Quality (IDQ)
- Azure Databricks
- Azure Data Lake
- Azure Data Factory
- API integration
Posted 1 week ago
5.0 - 8.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Job Information
Job Opening ID: ZR_1628_JOB | Date Opened: 09/12/2022 | Industry: Technology | Work Experience: 5-8 years | Job Title: Data Engineer | City: Bangalore | Province: Karnataka | Country: India | Postal Code: 560001 | Number of Positions: 4
Roles and responsibilities:
- 4+ years of experience as a data developer using Python
- Knowledge of Spark and PySpark is preferable but not mandatory
- Azure cloud experience preferred (alternate cloud experience is fine), ideally with the Azure platform including Azure Data Lake, Databricks, and Data Factory
- Working knowledge of different file formats such as JSON, Parquet, and CSV
- Familiarity with data encryption and data masking
- Database experience in SQL Server is preferable; experience with NoSQL databases like MongoDB is preferred
- Team player; reliable, self-motivated, and self-disciplined
Posted 1 week ago
Gurugram, Haryana, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. How will you make an impact in this role? With a focus on digitization, innovation, and analytics, the Enterprise Digital and Data Solutions (EDDS) team creates central, scalable platforms and customer experiences to help markets across all of these priorities. Its charter is to drive scale for the business and accelerate innovation, both for immediate impact and for long-term transformation of our business. A unique aspect of EDDS is the integration of diverse skills across its remit; EDDS has a very broad range of responsibilities, resulting in a broad range of initiatives around the world. The American Express Enterprise Digital Experimentation & Analytics (EDEA) team leads the enterprise product analytics and experimentation charter for Brand & Performance Marketing and Digital Acquisition & Membership experiences, as well as enterprise platforms. The focus of this collaborative team is to drive growth by enabling efficiencies in paid performance channels and evolving our digital experiences with actionable insights and analytics. The team specializes in using data on digital product usage to drive improvements in the acquisition customer experience, delivering higher satisfaction and business value. Purpose of the role: this role will report to the Director of the Marketing Optimization Capabilities Analytics (MOCA) team within Enterprise Digital Experimentation & Analytics (EDEA) and will be based in Gurugram, India.
The candidate will be responsible for leading data-strategy efforts for the Media Mix Model (MOCA): working with product and data science teams to develop advanced analytical solutions to media measurement problems, addressing implementation and usage needs of partner finance and data teams, and delivering highly impactful analytics to optimize the performance of media marketing channels through the application of insights from MOCA models.
Responsibilities:
- Work with Data Science to define different model structures and approaches to measuring media impact
- Partner with MOCA product teams to develop and maintain products for data aggregation, simulation, and scenario planning
- Lead a team of 4 to collate the data needed for MOCA models across multiple markets and business units
- Perform trend analysis and work with marketing partners to lay out business context for the models
- Work with an external data aggregator to automate data pipelines for MOCA
- Lead model documentation efforts for MOCA model certification
Minimum qualifications:
- Advanced degree in a quantitative field (e.g., Finance, Engineering, Mathematics, Computer Science)
- Strong programming skills preferred; some experience with big data programming languages (Hive, Spark), Python, and SQL
- Experience in large-scale data processing and handling; understanding of data science is a plus
- Ability to work in a dynamic, cross-functional environment, with strong attention to detail
- Excellent communication skills with the ability to engage, influence, and encourage partners to drive collaboration and alignment
Preferred qualifications:
- Strong analytical and conceptual thinking to solve unstructured, complex business problems and articulate key findings to senior leaders and partners in a succinct and concise manner
- Basic knowledge of statistical techniques for experimentation and hypothesis testing: regression, t-tests, chi-square tests
- Understanding of media mix models and experience with modeling
We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities
American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
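To make the hypothesis-testing qualification concrete: a two-sample t-test compares the means of two groups (say, a media treatment vs. a control) relative to their variability. The sketch below is not from the posting; it computes Welch's t statistic with only the standard library (real analysis would use `scipy.stats.ttest_ind` to also get a p-value), and the sample data is invented.

```python
# Stdlib-only sketch of Welch's two-sample t statistic, the kind of
# hypothesis-testing tool the qualifications above refer to. The data
# is made up; real work would use scipy.stats.ttest_ind for p-values.
from math import sqrt
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    se = sqrt(va / na + vb / nb)  # standard error of the mean difference
    return (mean(sample_a) - mean(sample_b)) / se

control = [10.1, 9.8, 10.3, 10.0, 9.9]
treatment = [10.6, 10.4, 10.8, 10.5, 10.7]
t_stat = welch_t(treatment, control)  # large |t| suggests a real difference
```

With the made-up samples above, the treatment mean (10.6) sits several standard errors above the control mean (10.02), so the statistic comes out large and positive.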
Posted 1 week ago
Gurugram, Haryana, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. How will you make an impact in this role? The position is in Global Contact and Capacity Management (GCCM). GCCM is responsible for all chat volume forecasting, capacity/staff planning, operational expense management, configuration, and real-time performance management and monitoring for GSG across various markets globally. The group executes plans built by the Forecasting & Business Planning teams and manages 24/7 real-time performance in the voice and digital channels. The group ensures that robust schedules are designed to meet the demand of daily operations. The schedules are aligned to intraday/intraweek chat volume distributions for all markets and lines of business. The incumbent will be part of the workforce optimization pillar within the Global Capacity & Contact Optimization team supporting digital markets. Primary responsibilities include short-term planning, scheduling, reporting, and managing key performance indicators such as wait times, abandon rates, CHT, shrinkage, and staffing optimization.
Key deliverables:
- Interface with analysts, team leaders, and other members of management
- Manage, update, and report real-time activities in the department
- Monitor real-time adherence (RTA) and communicate staffing discrepancies to team leaders
- Record and maintain a count of productive FTEs
- Capacity management for sub-processes
- Work with the short-term forecasting team on IDPs and staffing
- Leave cap formulation; provide advisory support on release of FTEs from the process
- Communicate systems, voice response, and telecommunication issues to the department
- Real-time adherence, monitoring, and communication; raise awareness of RTA issues that are impacting service level and aging objectives
- Proactively identify improvement opportunities in areas such as shift mix and hours of operation
- Analyze and define, at regular intervals, the best time to contact card members to improve total contacts in the process
- Inbound chat pattern analysis, trending, and staff alignment
- Maintain strong relationships with team leaders and SDLs to improve overall understanding and awareness of daily/weekly business impacts
- Feedback, huddle timings, training schedules, and other off-the-phone activities
Minimum qualifications
Functional skills:
- Bachelor's degree (Mathematics/Statistics/Data Analytics); an MBA or equivalent is a plus
- 2+ years of relevant experience in workforce planning, operations, or MIS analytics preferred
- Proficiency in workforce management tools such as Avaya, eWFM, and Genesys, as well as an understanding of call center volume drivers and forecasting/workforce planning processes, would be an added advantage
- Strong written and verbal communication skills with demonstrated success in creating and conducting presentations to large/senior/challenging audiences, a plus
- Strong organizational and project management skills
- Proven ability to manage multiple priorities effectively, with a track record of driving results while meeting deadlines
- Strong relationship and collaboration skills, including the ability to work in a highly matrixed environment
Behavioral skills/capabilities:
- Delivers high-quality work with direction and oversight
- Understands work goals and seeks to understand their importance to the BU and/or the Blue Box
- Comfortable making decisions and taking calculated risks based on facts and intuition
- Flexible enough to adjust quickly to shifting priorities, multiple demands, ambiguity, and rapid change
- Maintains a positive attitude when presented with a barrier
- Demonstrated ability to challenge the status quo and build consensus
Technical skills/knowledge of platforms:
- Proficiency with Microsoft Office, especially Excel and PowerPoint
- Working experience with Power BI is needed
- Project management skills; knowledge and experience of successfully leading projects, a plus
- Ability to handle large data sets; prior programming experience in SAS, SQL, Python, and/or HQL (Hive Query Language), writing code independently and efficiently, will be useful
- Knowledge of machine learning is an added advantage
- Exposure to big data platforms such as Cornerstone and visualization tools like Tableau, a nice to have
We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities
American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 1 week ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
- 10+ years of software development experience building large-scale distributed data processing systems/applications, data engineering systems, or large-scale internet systems
- At least 4 years of experience developing or leading big data solutions at enterprise scale, with at least one end-to-end implementation
- Strong experience in programming languages: Java/J2EE/Scala
- Good experience with Spark/Hadoop/HDFS architecture, YARN, Confluent Kafka, HBase, Hive, Impala, and NoSQL databases
- Experience with batch processing and AutoSys job scheduling and monitoring
- Performance analysis, troubleshooting, and resolution (this includes familiarity with, and investigation of, Cloudera/Hadoop logs)
- Work with Cloudera on open issues that would result in cluster configuration changes, then implement as needed
- Strong experience with databases and query engines such as SQL, Hive, Elasticsearch, and HBase
- Knowledge of Hadoop security, data management, and governance
Primary skills: Java/Scala, ETL, Spark, Hadoop, Hive, Impala, Sqoop, HBase, Confluent Kafka, Oracle, Linux, Git, Jenkins CI/CD
Posted 1 week ago
Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.
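To give a flavor of the querying Hive is used for: HiveQL is close to standard SQL, so the shape of a typical Hive aggregation can be prototyped against any SQL engine. The sketch below uses Python's built-in sqlite3 as a stand-in; the `sales` table and its columns are invented for illustration, and in Hive the same SELECT would run over tables backed by distributed storage.

```python
# HiveQL is largely standard SQL; this sketch runs a Hive-style
# aggregation against Python's built-in sqlite3 as a stand-in engine.
# The sales table and its columns are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("south", 120.0), ("south", 80.0), ("north", 50.0)])

# The same GROUP BY query would be valid HiveQL over a warehouse table.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM sales GROUP BY region ORDER BY total DESC"
).fetchall()
# rows -> [('south', 200.0), ('north', 50.0)]
```

The practical difference is scale: Hive compiles such queries into distributed jobs over files in HDFS or object storage, which is why it pairs the familiar SQL surface with big data infrastructure skills.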
India's major tech hubs are known for their thriving technology industries and offer numerous opportunities for professionals looking to work with Hive.
The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.
Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.
Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.
As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!