Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
0 years
2 - 2 Lacs
Gurgaon
On-site
Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title and Summary Data Scientist Who is Mastercard? Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships, and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all. Our Team: As consumer preference for digital payments continues to grow, ensuring a seamless and secure consumer experience is top of mind. The Optimization Solutions team focuses on tracking digital performance across all products and regions, understanding the factors influencing performance and the broader industry landscape. This includes delivering data-driven insights and business recommendations, engaging directly with key external stakeholders on implementing optimization solutions (new and existing), and partnering across the organization to drive alignment and ensure action is taken. Are you excited about Data Assets and the value they bring to an organization? Are you an evangelist for data-driven decision-making? Are you motivated to be part of a team that builds large-scale Analytical Capabilities supporting end users across 6 continents? Do you want to be the go-to resource for data science & analytics in the company? The Role: Work closely with the global optimization solutions team to architect, develop, and maintain advanced reporting and data visualization capabilities on large volumes of data to support data insights and analytical needs across products, markets, and services. The candidate for this position will focus on building solutions using Machine Learning and creating actionable insights to support product optimization and sales enablement. Prototype new algorithms; experiment, evaluate, and deliver actionable insights. Drive the evolution of products with an impact focused on data science and engineering. Design machine learning systems and self-running artificial intelligence (AI) software to automate predictive models. Perform data ingestion, aggregation, and processing on high-volume and high-dimensionality data to drive and enable data unification and produce relevant insights. Continuously innovate and determine new approaches, tools, techniques & technologies to solve business problems and generate business insights & recommendations. Apply knowledge of metrics, measurements, and benchmarking to complex and demanding solutions.
All about You: A superior academic record at a leading university in Computer Science, Data Science, Technology, mathematics, statistics, or a related field, or equivalent work experience. Experience in data management, data mining, data analytics, data reporting, data product development, and quantitative analysis. Strong analytical skills with a track record of translating data into compelling insights. Prior experience working in a product development role. Knowledge of ML frameworks, libraries, data structures, data modeling, and software architecture. Proficiency in using Python/Spark, Hadoop platforms & tools (Hive, Impala, Airflow, NiFi), and SQL to build Big Data products & platforms. Experience with an Enterprise Business Intelligence Platform/Data platform (e.g., Tableau, Power BI) is a plus. Demonstrated success interacting with stakeholders to understand technical needs and ensuring analyses and solutions meet their needs effectively. Ability to build a strong narrative on the business value of products and actively participate in sales enablement efforts. Able to work in a fast-paced, deadline-driven environment as part of a team and as an individual contributor. Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach; and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
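As an illustration of the kind of model prototyping and evaluation the role describes, a minimal Python sketch follows; the synthetic data, feature names, and model choice are assumptions for illustration only, not requirements from the posting.

```python
# Illustrative only: synthetic data stand in for real payment features,
# and the model/metric are placeholder choices.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Hypothetical features: transaction amount, hour of day, merchant risk score.
X = rng.normal(size=(5000, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("Holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```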
Posted 4 days ago
8.0 years
6 - 8 Lacs
Chennai
On-site
Develop, test, and deploy data processing applications using Apache Spark and Scala. Optimize and tune Spark applications for better performance on large-scale data sets. Work with the Cloudera Hadoop ecosystem (e.g., HDFS, Hive, Impala, HBase, Kafka) to build data pipelines and storage solutions. Collaborate with data scientists, business analysts, and other developers to understand data requirements and deliver solutions. Design and implement high-performance data processing and analytics solutions. Ensure data integrity, accuracy, and security across all processing tasks. Troubleshoot and resolve performance issues in Spark, Cloudera, and related technologies. Implement version control and CI/CD pipelines for Spark applications. Required Skills & Experience: Minimum 8 years of experience in application development. Strong hands-on experience in Apache Spark, Scala, and Spark SQL for distributed data processing. Hands-on experience with Cloudera Hadoop (CDH) components such as HDFS, Hive, Impala, HBase, Kafka, and Sqoop. Familiarity with other Big Data technologies, including Apache Kafka, Flume, Oozie, and NiFi. Experience building and optimizing ETL pipelines using Spark and working with structured and unstructured data. Experience with SQL and NoSQL databases such as HBase, Hive, and PostgreSQL. Knowledge of data warehousing concepts, dimensional modeling, and data lakes. Ability to troubleshoot and optimize Spark and Cloudera platform performance. Familiarity with version control tools like Git and CI/CD tools (e.g., Jenkins, GitLab).
Posted 4 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Experience: 5+ Years Role Overview: Responsible for designing, building, and maintaining scalable data pipelines and architectures. This role requires expertise in SQL, ETL frameworks, big data technologies, cloud services, and programming languages to ensure efficient data processing, storage, and integration across systems. Requirements: • Minimum 5+ years of experience as a Data Engineer or similar data-related role. • Strong proficiency in SQL for querying databases and performing data transformations. • Experience with data pipeline frameworks (e.g., Apache Airflow, Luigi, or custom-built solutions). • Proficiency in at least one programming language such as Python, Java, or Scala for data processing tasks. • Experience with cloud-based data services and data lakes (e.g., Snowflake, Databricks, AWS S3, GCP BigQuery, or Azure Data Lake). • Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka). • Experience with ETL tools (e.g., Talend, Apache NiFi, SSIS, etc.) and data integration techniques. • Knowledge of data warehousing concepts and database design principles. • Good understanding of NoSQL and Big Data technologies such as MongoDB, Cassandra, Spark, Hadoop, and Hive. • Experience with data modeling and schema design for OLAP and OLTP systems. • Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes). Educational Qualification: Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field.
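Since the posting calls out data pipeline frameworks such as Apache Airflow, a minimal sketch of a daily ETL DAG follows; it assumes a recent Apache Airflow 2.x release, and the task bodies, names, and business rule are placeholders.

```python
# A minimal daily ETL DAG sketch, assuming Apache Airflow 2.x; task bodies are placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_etl():
    @task
    def extract():
        # Pull raw rows from a source system (placeholder data).
        return [{"id": 1, "amount": 120.0}, {"id": 2, "amount": 75.5}]

    @task
    def transform(rows):
        # Keep only rows that pass a simple business rule.
        return [r for r in rows if r["amount"] > 100]

    @task
    def load(rows):
        # Write to the warehouse; printed here as a stand-in.
        print(f"loading {len(rows)} rows")

    load(transform(extract()))


example_etl()
```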
Posted 4 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role Summary Pfizer’s purpose is to deliver breakthroughs that change patients’ lives. Research and Development is at the heart of fulfilling Pfizer’s purpose as we work to translate advanced science and technologies into the therapies and vaccines that matter most. Whether you are in the discovery sciences, ensuring drug safety and efficacy, or supporting clinical trials, you will apply cutting-edge design and process development capabilities to accelerate and bring best-in-class medicines to patients around the world. Pfizer is seeking a highly skilled and motivated AI Engineer to join our advanced technology team. The successful candidate will be responsible for developing, implementing, and optimizing artificial intelligence models and algorithms to drive innovation and efficiency in our Data Analytics and Supply Chain solutions. This role demands a collaborative mindset, a passion for cutting-edge technology, and a commitment to improving patient outcomes. Role Responsibilities Lead data modeling and engineering efforts within advanced data platforms teams to achieve digital outcomes. Provide guidance and may lead/co-lead moderately complex projects. Oversee the development and execution of test plans, creation of test scripts, and thorough data validation processes. Lead the architecture, design, and implementation of Cloud Data Lake, Data Warehouse, Data Marts, and Data APIs. Lead the development of complex data products that benefit PGS and ensure reusability across the enterprise. Collaborate effectively with contractors to deliver technical enhancements. Oversee the development of automated systems for building, testing, monitoring, and deploying ETL data pipelines within a continuous integration environment. Collaborate with backend engineering teams to analyze data, enhancing its quality and consistency. Conduct root cause analysis and address production data issues. Lead the design, development, and implementation of AI models and algorithms to solve sophisticated data analytics and supply chain initiatives. Stay abreast of the latest advancements in AI and machine learning technologies and apply them to Pfizer's projects. Provide technical expertise and guidance to team members and stakeholders on AI-related initiatives. Document and present findings, methodologies, and project outcomes to various stakeholders. Integrate and collaborate with different technical teams across Digital to drive overall implementation and delivery. Ability to work with large and complex datasets, including data cleaning, preprocessing, and feature selection. Basic Qualifications A bachelor's or master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related discipline. Over 4 years of experience as a Data Engineer, Data Architect, or in Data Warehousing, Data Modeling, and Data Transformations. Over 2 years of experience in AI, machine learning, and large language model (LLM) development and deployment. A proven track record of successfully implementing AI solutions in a healthcare or pharmaceutical setting is preferred. Strong understanding of data structures, algorithms, and software design principles. Programming Languages: Proficiency in Python, SQL, and familiarity with Java or Scala. AI and Automation: Knowledge of AI-driven tools for data pipeline automation, such as Apache Airflow or Prefect.
Ability to use GenAI or Agents to augment data engineering practices. Preferred Qualifications Data Warehousing: Experience with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake. ETL Tools: Knowledge of ETL tools like Apache NiFi, Talend, or Informatica. Big Data Technologies: Familiarity with Hadoop, Spark, and Kafka for big data processing. Cloud Platforms: Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP). Containerization: Understanding of Docker and Kubernetes for containerization and orchestration. Data Integration: Skills in integrating data from various sources, including APIs, databases, and external files. Data Modeling: Understanding of data modeling and database design principles, including graph technologies like Neo4j or Amazon Neptune. Structured Data: Proficiency in handling structured data from relational databases, data warehouses, and spreadsheets. Unstructured Data: Experience with unstructured data sources such as text, images, and log files, and tools like Apache Solr or Elasticsearch. Data Excellence: Familiarity with data excellence concepts, including data governance, data quality management, and data stewardship. Non-standard Work Schedule, Travel Or Environment Requirements: Occasional travel required. Work Location Assignment: Hybrid. The annual base salary for this position ranges from $96,300.00 to $160,500.00. In addition, this position is eligible for participation in Pfizer’s Global Performance Plan with a bonus target of 12.5% of the base salary and eligibility to participate in our share-based long-term incentive program. We offer comprehensive and generous benefits and programs to help our colleagues lead healthy lives and to support each of life’s moments. Benefits offered include a 401(k) plan with Pfizer Matching Contributions and an additional Pfizer Retirement Savings Contribution, paid vacation, holiday and personal days, paid caregiver/parental and medical leave, and health benefits to include medical, prescription drug, dental and vision coverage. Learn more at Pfizer Candidate Site – U.S. Benefits | (uscandidates.mypfizerbenefits.com). Pfizer compensation structures and benefit packages are aligned based on the location of hire. The United States salary range provided does not apply to Tampa, FL or any location outside of the United States. Relocation assistance may be available based on business needs and/or eligibility. Sunshine Act Pfizer reports payments and other transfers of value to health care providers as required by federal and state transparency laws and implementing regulations. These laws and regulations require Pfizer to provide government agencies with information such as a health care provider’s name, address and the type of payments or other value received, generally for public disclosure. Subject to further legal review and statutory or regulatory clarification, which Pfizer intends to pursue, reimbursement of recruiting expenses for licensed physicians may constitute a reportable transfer of value under the federal transparency law commonly known as the Sunshine Act. Therefore, if you are a licensed physician who incurs recruiting expenses as a result of interviewing with Pfizer that we pay or reimburse, your name, address and the amount of payments made currently will be reported to the government. If you have questions regarding this matter, please do not hesitate to contact your Talent Acquisition representative.
EEO & Employment Eligibility Pfizer is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability or veteran status. Pfizer also complies with all applicable national, state and local laws governing nondiscrimination in employment as well as work authorization and employment eligibility verification requirements of the Immigration and Nationality Act and IRCA. Pfizer is an E-Verify employer. This position requires permanent work authorization in the United States. Information & Business Tech
Posted 4 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have experience in designing, developing, and maintaining scalable data pipelines and architectures using Hadoop, PySpark, ETL processes, and Cloud technologies. Role: Senior Data Engineer Experience: 4-8 years Job locations: Coimbatore, Chennai, Bangalore, Hyderabad Responsibilities Design, develop, and maintain data pipelines for processing large-scale datasets. Build efficient ETL workflows to transform and integrate data from multiple sources. Develop and optimize Hadoop and PySpark applications for data processing. Ensure data quality, governance, and security standards are met across systems. Implement and manage Cloud-based data solutions (AWS, Azure, or GCP). Collaborate with data scientists and analysts to support business intelligence initiatives. Troubleshoot performance issues and optimize query executions in big data environments. Stay updated with industry trends and advancements in big data and cloud technologies. Required Skills Strong programming skills in Python, Scala, or Java. Hands-on experience with the Hadoop ecosystem (HDFS, Hive, Spark, etc.). Expertise in PySpark for distributed data processing. Proficiency in ETL tools and workflows (SSIS, Apache NiFi, or custom pipelines). Experience with Cloud platforms (AWS, Azure, GCP) and their data-related services. Knowledge of SQL and NoSQL databases. Familiarity with data warehousing concepts and data modeling techniques. Strong analytical and problem-solving skills. Interested candidates can contact us at +91 7305206696 / saranyadevib@talentien.com Skills: sql, data warehousing, aws, cloud, hadoop, scala, java, python, data engineering, azure, cloud technologies (aws, azure, gcp), etl processes, data modeling, nosql, pyspark, etl
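To illustrate the kind of PySpark ETL workflow the role describes, here is a minimal sketch; the storage paths, column names, and transformation rule are hypothetical and stand in for real sources and sinks.

```python
# A minimal PySpark ETL sketch; paths, columns, and the aggregation are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV landed in a hypothetical raw zone.
raw = spark.read.option("header", True).csv("s3a://raw-zone/orders/")

# Transform: cast types, drop bad rows, derive a daily aggregate per country.
daily = (raw
         .withColumn("amount", F.col("amount").cast("double"))
         .filter(F.col("amount").isNotNull())
         .groupBy("order_date", "country")
         .agg(F.sum("amount").alias("total_amount")))

# Load: write partitioned Parquet for downstream consumers.
daily.write.mode("overwrite").partitionBy("order_date").parquet("s3a://curated-zone/daily_orders/")
```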
Posted 5 days ago
4.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
We are seeking a skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have experience in designing, developing, and maintaining scalable data pipelines and architectures using Hadoop, PySpark, ETL processes, and Cloud technologies. Role: Senior Data Engineer Experience: 4-8 years Job locations: Coimbatore, Chennai, Bangalore, Hyderabad Responsibilities Design, develop, and maintain data pipelines for processing large-scale datasets. Build efficient ETL workflows to transform and integrate data from multiple sources. Develop and optimize Hadoop and PySpark applications for data processing. Ensure data quality, governance, and security standards are met across systems. Implement and manage Cloud-based data solutions (AWS, Azure, or GCP). Collaborate with data scientists and analysts to support business intelligence initiatives. Troubleshoot performance issues and optimize query executions in big data environments. Stay updated with industry trends and advancements in big data and cloud technologies. Required Skills Strong programming skills in Python, Scala, or Java. Hands-on experience with the Hadoop ecosystem (HDFS, Hive, Spark, etc.). Expertise in PySpark for distributed data processing. Proficiency in ETL tools and workflows (SSIS, Apache NiFi, or custom pipelines). Experience with Cloud platforms (AWS, Azure, GCP) and their data-related services. Knowledge of SQL and NoSQL databases. Familiarity with data warehousing concepts and data modeling techniques. Strong analytical and problem-solving skills. Interested candidates can contact us at +91 7305206696 / saranyadevib@talentien.com Skills: sql, data warehousing, aws, cloud, hadoop, scala, java, python, data engineering, azure, cloud technologies (aws, azure, gcp), etl processes, data modeling, nosql, pyspark, etl
Posted 5 days ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have experience in designing, developing, and maintaining scalable data pipelines and architectures using Hadoop, PySpark, ETL processes, and Cloud technologies. Role: Senior Data Engineer Experience: 4-8 years Job locations: Coimbatore, Chennai, Bangalore, Hyderabad Responsibilities Design, develop, and maintain data pipelines for processing large-scale datasets. Build efficient ETL workflows to transform and integrate data from multiple sources. Develop and optimize Hadoop and PySpark applications for data processing. Ensure data quality, governance, and security standards are met across systems. Implement and manage Cloud-based data solutions (AWS, Azure, or GCP). Collaborate with data scientists and analysts to support business intelligence initiatives. Troubleshoot performance issues and optimize query executions in big data environments. Stay updated with industry trends and advancements in big data and cloud technologies. Required Skills Strong programming skills in Python, Scala, or Java. Hands-on experience with the Hadoop ecosystem (HDFS, Hive, Spark, etc.). Expertise in PySpark for distributed data processing. Proficiency in ETL tools and workflows (SSIS, Apache NiFi, or custom pipelines). Experience with Cloud platforms (AWS, Azure, GCP) and their data-related services. Knowledge of SQL and NoSQL databases. Familiarity with data warehousing concepts and data modeling techniques. Strong analytical and problem-solving skills. Interested candidates can contact us at +91 7305206696 / saranyadevib@talentien.com Skills: sql, data warehousing, aws, cloud, hadoop, scala, java, python, data engineering, azure, cloud technologies (aws, azure, gcp), etl processes, data modeling, nosql, pyspark, etl
Posted 5 days ago
15.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Introduction A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience. Your Role And Responsibilities Location: Mumbai Role Overview As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems. Key Responsibilities Build scalable batch and real-time ETL pipelines using Spark and Hive Integrate structured and unstructured data sources Perform performance tuning and code optimization Support orchestration and job scheduling (NiFi, Airflow) Preferred Education Master's Degree Required Technical And Professional Expertise Experience: 3–15 years Proficiency in PySpark/Scala with Hive/Impala Experience with data partitioning, bucketing, and optimization Familiarity with Kafka, Iceberg, and NiFi is a must Knowledge of banking or financial datasets is a plus
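The role highlights Spark-on-Cloudera pipelines with partitioning and bucketing into Hive; a minimal PySpark sketch of such a batch load follows, with the table names, column names, and bucket count assumed purely for illustration.

```python
# A minimal sketch of a partitioned, bucketed Hive load; table and column names are assumed.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("txn_batch_load")
         .enableHiveSupport()          # needed so saveAsTable writes to the Hive metastore
         .getOrCreate())

txns = spark.table("staging.card_transactions")   # hypothetical staging table

(txns.write
     .mode("overwrite")
     .partitionBy("txn_date")          # prune scans by business date
     .bucketBy(32, "account_id")       # co-locate rows for account-level joins
     .sortBy("account_id")
     .saveAsTable("curated.card_transactions"))
```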
Posted 5 days ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Data Scientist Who is Mastercard? Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships, and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all. Our Team As consumer preference for digital payments continues to grow, ensuring a seamless and secure consumer experience is top of mind. The Optimization Solutions team focuses on tracking digital performance across all products and regions, understanding the factors influencing performance and the broader industry landscape. This includes delivering data-driven insights and business recommendations, engaging directly with key external stakeholders on implementing optimization solutions (new and existing), and partnering across the organization to drive alignment and ensure action is taken. Are you excited about Data Assets and the value they bring to an organization? Are you an evangelist for data-driven decision-making? Are you motivated to be part of a team that builds large-scale Analytical Capabilities supporting end users across 6 continents? Do you want to be the go-to resource for data science & analytics in the company? The Role Work closely with the global optimization solutions team to architect, develop, and maintain advanced reporting and data visualization capabilities on large volumes of data to support data insights and analytical needs across products, markets, and services. The candidate for this position will focus on building solutions using Machine Learning and creating actionable insights to support product optimization and sales enablement. Prototype new algorithms; experiment, evaluate, and deliver actionable insights. Drive the evolution of products with an impact focused on data science and engineering. Design machine learning systems and self-running artificial intelligence (AI) software to automate predictive models. Perform data ingestion, aggregation, and processing on high-volume and high-dimensionality data to drive and enable data unification and produce relevant insights. Continuously innovate and determine new approaches, tools, techniques & technologies to solve business problems and generate business insights & recommendations. Apply knowledge of metrics, measurements, and benchmarking to complex and demanding solutions.
All About You A superior academic record at a leading university in Computer Science, Data Science, Technology, mathematics, statistics, or a related field, or equivalent work experience. Experience in data management, data mining, data analytics, data reporting, data product development, and quantitative analysis. Strong analytical skills with a track record of translating data into compelling insights. Prior experience working in a product development role. Knowledge of ML frameworks, libraries, data structures, data modeling, and software architecture. Proficiency in using Python/Spark, Hadoop platforms & tools (Hive, Impala, Airflow, NiFi), and SQL to build Big Data products & platforms. Experience with an Enterprise Business Intelligence Platform/Data platform (e.g., Tableau, Power BI) is a plus. Demonstrated success interacting with stakeholders to understand technical needs and ensuring analyses and solutions meet their needs effectively. Ability to build a strong narrative on the business value of products and actively participate in sales enablement efforts. Able to work in a fast-paced, deadline-driven environment as part of a team and as an individual contributor. Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach; and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. R-250830
Posted 5 days ago
0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
What is your Role? You will work in a multi-functional role with a combination of expertise in System and Hadoop administration. You will work in a team that often interacts with customers on various aspects related to technical support for deployed systems. You will be deputed at customer premises to assist customers with issues related to System and Hadoop administration. You will interact with the QA and Engineering teams to coordinate issue resolution within the SLA promised to the customer. What will you do? Deploying and administering Hortonworks, Cloudera, and Apache Hadoop/Spark ecosystems. Installing Linux operating systems and networking. Writing Unix SHELL/Ansible scripts for automation. Maintaining core components such as ZooKeeper, Kafka, NiFi, HDFS, YARN, Redis, Spark, HBase, etc. Taking care of the day-to-day running of Hadoop clusters using Ambari/Cloudera Manager/other monitoring tools, ensuring that the Hadoop cluster is up and running all the time. Maintaining HBase clusters and capacity planning. Maintaining SOLR clusters and capacity planning. Working closely with the database team, network team, and application teams to make sure that all the big data applications are highly available and performing as expected. Managing the KVM Virtualization environment.
Posted 5 days ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Profile: Sr. DW BI Developer Location: Sector 64, Noida (Work from Office) Position Overview: Working with the Finance Systems Manager, the role will ensure that the ERP system is available and fit for purpose. The ERP Systems Developer will be developing the ERP system, providing comprehensive day-to-day support and training, and developing the current ERP system for the future. Key Responsibilities: As a Sr. DW BI Developer, the candidate will participate in the design, development, customization, and maintenance of software applications. As a DW BI Developer, the person should analyse the different applications/products, and design and implement the DW using best practices. Rich data governance experience: data security, data quality, provenance/lineage. The candidate will also be maintaining a close working relationship with the other application stakeholders. Experience of developing secured and high-performance web applications. Knowledge of software development life-cycle methodologies, e.g. Iterative, Waterfall, Agile, etc. Designing and architecting future releases of the platform. Participating in troubleshooting application issues. Jointly working with other teams and partners handling different aspects of the platform creation. Tracking advancements in software development technologies and applying them judiciously in the solution roadmap. Ensuring all quality controls and processes are adhered to. Planning the major and minor releases of the solution. Ensuring robust configuration management. Working closely with the Engineering Manager on different aspects of product lifecycle management. Demonstrate the ability to independently work in a fast-paced environment requiring multitasking and efficient time management. Required Skills and Qualifications: End-to-end lifecycle of data warehousing, data lakes, and reporting. Experience with maintaining/managing data warehouses. Responsible for the design and development of large, scaled-out, real-time, high-performing Data Lake / Data Warehouse systems (including Big Data and Cloud). Strong SQL and analytical skills. Experience in Power BI, Tableau, QlikView, Qlik Sense, etc. Experience in Microsoft Azure services. Experience in developing and supporting ADF pipelines. Experience in Azure SQL Server / Databricks / Azure Analysis Services. Experience in developing tabular models. Experience in working with APIs. Minimum 2 years of experience in a similar role. Experience with data warehousing and data modelling. Strong experience in SQL. 2-6 years of total experience in building DW/BI systems. Experience with ETL and working with large-scale datasets. Proficiency in writing and debugging complex SQLs. Prior experience working with global clients. Hands-on experience with Kafka, Flink, Spark, Snowflake, Airflow, NiFi, Oozie, Pig, Hive, Impala, and Sqoop. Storage such as HDFS, object storage (S3, etc.), RDBMS, MPP, and NoSQL databases. Experience with distributed data management and data failover, including databases (relational, NoSQL, Big Data), data analysis, data processing, data transformation, high availability, and scalability. Experience in end-to-end project implementation in the Cloud (Azure / AWS / GCP) as a DW BI Developer. Rich data governance experience: data security, data quality, provenance/lineage. Understanding of industry trends and products in DataOps, continuous intelligence, augmented analytics, and AI/ML.
Prior experience of working in cloud environments such as Azure, AWS, and GCP. Prior experience of working with global clients. To know our Privacy Policy, please click on the link below or copy and paste the URL into your browser: https://gedu.global/wp-content/uploads/2023/09/GEDU-Privacy-Policy-22092023-V2.0-1.pdf
Posted 5 days ago
5.0 - 8.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Key Responsibilities Lead the deployment, configuration, and ongoing administration of Hortonworks, Cloudera, and Apache Hadoop/Spark ecosystems. Maintain and monitor core components of the Hadoop ecosystem including ZooKeeper, Kafka, NiFi, HDFS, YARN, Redis, Spark, and HBase. Take charge of the day-to-day running of Hadoop clusters using tools like Ambari, Cloudera Manager, or other monitoring tools, ensuring continuous availability and optimal performance. Manage and provide expertise in HBase clusters and SOLR clusters, including capacity planning and performance tuning. Perform installation, configuration, and troubleshooting of Linux operating systems and network components relevant to big data environments. Develop and implement automation scripts using Unix SHELL/Ansible scripting to streamline operational tasks and improve efficiency. Manage and maintain KVM Virtualization environments. Oversee clusters, storage solutions, backup strategies, and disaster recovery plans for big data infrastructure. Implement and manage comprehensive monitoring tools to proactively identify and address system anomalies and performance bottlenecks. Work closely with database teams, network teams, and application teams to ensure high availability and expected performance of all big data applications. Interact directly with customers at their premises to provide technical support and resolve issues related to System and Hadoop administration. Coordinate closely with internal QA and Engineering teams to facilitate issue resolution within promised SLAs. Skills & Qualifications: Experience: 5-8 years of strong individual contributor experience as a DevOps, System, and/or Hadoop administrator. Domain Expertise: Proficient in Linux administration. Extensive experience with Hadoop infrastructure and administration. Strong knowledge and experience with SOLR. Proficiency in configuration management tools. Big Data Ecosystem Components: Must have hands-on experience and strong knowledge of managing and maintaining: Hortonworks, Cloudera, and Apache Hadoop/Spark ecosystem deployments. Core components like ZooKeeper, Kafka, NiFi, HDFS, YARN, Redis, Spark, HBase. Cluster management tools such as Ambari and Cloudera Manager. Scripting: Strong scripting skills in one or more of Perl, Python, or Shell. Infrastructure Management: Strong experience working with clusters, storage solutions, backup strategies, database management systems, monitoring tools, and disaster recovery plans. Virtualization: Experience managing KVM Virtualization environments. Problem Solving: Excellent analytical and problem-solving skills, with a methodical approach to debugging complex issues. Communication: Strong communication skills (verbal and written) with the ability to interact effectively with technical teams and customers. Education: Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field, or equivalent relevant work experience. (ref:hirist.tech)
Posted 6 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About The Job The ideal candidate is a hands-on technology developer with experience in developing scalable applications and platforms. They must be at ease working in an agile environment with little supervision. The person should be self-motivated, with a passion for problem solving and continuous learning. Role And Responsibilities Strong technical, analytical, and problem-solving skills Strong organizational skills, with the ability to work autonomously as well as in a team-based environment Data pipeline framework development Technical Skills Requirements The candidate must demonstrate proficiency in data processing and extraction Ability to own and deliver on large, multi-faceted projects Fluency in complex SQL and experience with RDBMSs Project experience in Spark, PySpark, Scala, Python, NiFi, Hive, and NoSQL DBs Experience designing and building big data pipelines Experience working on large-scale, distributed systems Experience working on BigQuery would be an added advantage Strong hands-on experience with programming languages like PySpark, Scala with Spark, and Python. Exposure to various ETL and Business Intelligence tools Experience in shell scripting to automate pipeline execution. Solid grounding in Agile methodologies Experience with git and other source control systems Strong communication and presentation skills Nice-to-have Skills Experience in GTM, GA4 and Firebase BigQuery certification Unix or Shell scripting Strong delivery background across the delivery of high-value, business-facing technical projects in major organizations Experience of managing client delivery teams, ideally coming from a Data Engineering / Data Science environment API development Qualifications B.Tech./M.Tech./MS or BCA/MCA degree from a reputed university Looking for candidates with a notice period of 30 days or less (ref:hirist.tech)
Posted 6 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Key Responsibilities: Design and develop interactive dashboards, reports, and visualizations using Power BI to drive critical business insights. Write complex SQL queries, stored procedures, and functions to effectively extract, transform, and load (ETL) data from various sources. Optimize and maintain SQL databases, ensuring data integrity, performance, and reliability. Develop robust data models and implement sophisticated DAX calculations in Power BI for advanced analytics. Integrate Power BI with diverse data sources, including various databases, cloud storage solutions, and APIs. Work closely with business stakeholders to meticulously gather requirements and translate them into actionable Business Intelligence solutions. Troubleshoot performance issues related to Power BI dashboards and SQL queries, ensuring optimal system performance. Stay updated with the latest trends and advancements in Power BI, SQL, and the broader field of data analytics. All About You Hands-on experience managing technology projects with demonstrated ability to understand complex data and technology initiatives Ability to lead and influence others to advance deliverables Understanding of emerging technologies including but not limited to cloud architecture, machine learning/AI, and Big Data infrastructure Data architecture experience and experience in building data models. Experience deploying and working with big data technologies like Hadoop, Spark, and Sqoop. Experience with streaming frameworks like Kafka and Axon and pipelines like NiFi. Proficient in OO programming (Python, Java/Spring Boot/J2EE, and Scala) Experience with the Hadoop ecosystem (HDFS, YARN, MapReduce, Spark, Hive, Impala). Experience with Linux, the Unix command line, Unix shell scripting, SQL, and any scripting language Experience with data visualization tools such as Tableau, Domo, and/or Power BI is a plus. Experience presenting data findings in a readable and insight-driven format. Experience building support decks. (ref:hirist.tech)
Posted 6 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Skills: Python, Spark, Data Engineer, Cloudera, On-premise, Azure, Snowflake, Kafka. Overview Of The Company Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries. Team Overview The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization; that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution! About the role. Title: Lead Data Engineer Location: Mumbai Responsibilities End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow. Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the team's data pipeline framework evolution. Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise. Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices. Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights. Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth. Qualification Details Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field. Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOPS and functional programming concepts. Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.). Database Expertise: Excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus. End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks including streaming real-time data. Cloud Expertise: Knowledge of Cloud Technologies like Azure HDInsight, Synapse, Event Hubs and GCP Dataproc, Dataflow, BigQuery.
CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation. Desired Skills & Attributes Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems, troubleshoot data pipeline issues effectively. Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders). Continuous Learning & Adaptability: A demonstrated passion for staying up-to-date with emerging data technologies and a willingness to adapt to new tools.
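Given the posting's emphasis on streaming pipelines with Kafka and Spark Streaming, a minimal Structured Streaming sketch follows; the broker address, topic, schema, and sink paths are placeholders, and it assumes the spark-sql-kafka connector is available on the classpath.

```python
# A minimal Spark Structured Streaming sketch; broker, topic, schema, and paths are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("value", DoubleType()),
])

# Read JSON events from a hypothetical Kafka topic and parse them into columns.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "user-events")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Continuously append parsed events to a curated zone with checkpointing for recovery.
(events.writeStream
       .format("parquet")
       .option("path", "/data/curated/user_events")
       .option("checkpointLocation", "/data/checkpoints/user_events")
       .start()
       .awaitTermination())
```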
Posted 6 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Skills: Python, Apache Spark, Snowflake, Data Engineer, Spark, Kafka, Azure. Overview Of The Company Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries. Team Overview The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization; that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution! About the role. Title: Lead Data Engineer Location: Mumbai Responsibilities End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow. Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the team's data pipeline framework evolution. Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise. Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices. Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights. Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth. Qualification Details Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field. Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOPS and functional programming concepts. Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.). Database Expertise: Excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus. End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks including streaming real-time data. Cloud Expertise: Knowledge of Cloud Technologies like Azure HDInsight, Synapse, Event Hubs and GCP Dataproc, Dataflow, BigQuery.
CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation. Desired Skills & Attributes Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems, troubleshoot data pipeline issues effectively. Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders). Continuous Learning & Adaptability: A demonstrated passion for staying up-to-date with emerging data technologies and a willingness to adapt to new tools.
Posted 6 days ago
6.0 - 9.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Position Details EY’s GDS Assurance Digital team’s mission is to develop, implement and integrate technology solutions that better serve our audit clients and engagement teams. As a member of EY’s core Assurance practice, you’ll develop a deep Audit related technical knowledge and outstanding database, data analytics and programming skills. Ever-increasing regulations require audit departments to gather, organize and analyse more data than ever before. Often the data necessary to satisfy these ever-increasing and complex regulations must be collected from a variety of systems and departments throughout an organization. Effectively and efficiently handling the variety and volume of data is often extremely challenging and time consuming for a company. EY's GDS Assurance Digital team members work side-by-side with the firm's partners, clients and audit technical subject matter experts to develop and incorporate technology solutions that enhance value-add, improve efficiencies and enable our clients with disruptive and market leading tools supporting Assurance. GDS Assurance Digital provides solution architecture, application development, testing and maintenance support to the global Assurance service line both on a pro-active basis and in response to specific requests. EY is currently seeking a Big Data Developer to join the GDS Assurance Digital practice in Bangalore, India, to work on various Microsoft technology-based projects for customers across the globe. Requirements (including Experience, Skills And Additional Qualifications) A Bachelor's degree (BE/BTech/MCA & MBA) in Computer Science, Engineering, Information Systems Management, Accounting, Finance or a related field with adequate industry experience. BE/BTech/MCA with a sound industry experience of 6 to 9 years. 
Technical skills requirements: Experience with SQL and NoSQL databases such as HBase/Cassandra/MongoDB Good knowledge of Big Data querying tools, such as Pig and Hive ETL implementation in any tool like Alteryx or Azure Data Factory, etc. Good to have experience in NiFi Experience in any one of the reporting tools like Power BI/Tableau/Spotfire is a must Analytical/Decision Making Responsibilities: An ability to quickly understand complex concepts and use technology to support data modeling, analysis, visualization or process automation Selects appropriately from applicable standards, methods, tools and applications and uses them accordingly Ability to work within a multi-disciplinary team structure, but also independently Demonstrates an analytical and systematic approach to problem solving Communicates fluently orally and in writing and can present complex technical information to both technical and non-technical audiences Able to plan, schedule and monitor work activities to meet time and quality targets Able to absorb new technical information and business acumen rapidly and apply it effectively Ability to work in a team environment with strong customer focus, good listening, negotiation and problem-resolution skills Additional skills requirements: The expectation is that a Senior will be able to maintain long-term client relationships, network, and cultivate business development opportunities Should have an understanding and experience of software development best practices Must be a team player EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 6 days ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Details Position: Senior Lead Engineer Experience: 7-10 Years Work Mode: Onsite Location: Pune Skills Java 17+ Spring Boot SQL API Team Lead Key Responsibilities As a senior lead, build and mentor engineering teams and deliver on the below: Technical Expertise: Provide guidance on technical and functional aspects of the project and take decisions. You will set up and execute best practices in application architecture, development, code reviews, performance, deployment, and execution by owning end-to-end business deliveries. Innovation and Continuous Learning: Build a culture of innovation and continuous improvement in the team. Encourage and adopt new technology and emerging industry trends and methods for improving the efficiency of your team. Communication and Collaboration: Facilitate communication within the engineering teams and stakeholders, including quality, operations, product and program, to ensure alignment on business goals. Address technical issues in teams and promote a positive working environment. Team Management: Conduct performance reviews and set up a constructive feedback loop for team members along with Engineering Managers. Project Management: Participate in the overall planning by providing correct estimation and execution of engineering projects and ensure their timely delivery. Application layer technologies including Tomcat/Node.js, Netty, Spring Boot, Hibernate, Elasticsearch, Kafka, Apache Flink. Caching technologies like Redis, Aerospike or Hazelcast. Frontend technologies including ReactJS, Angular, Android/iOS. Data storage technologies like Oracle, S3, Postgres, MongoDB. Tooling including Git, command line, Jenkins, NiFi, Airflow, JMeter, Postman, Gatling, Nginx/HAProxy, Jira/Confluence, Grafana, K8s. Expertise And Qualifications Educational Qualifications: Bachelor's degree in Computer Science, Computer Engineering or comparable experience. Work Experience: 7+ years of hands-on experience in software development. Strong ownership & go-getter attitude – proactive in solving problems. Ability to thrive in a fast-paced, changing environment. Technical acumen – experience working with tech products and platforms. Skills: angular, airflow, postgres, android, frontend technologies, caching technologies, oracle, s3, reactjs, team lead, java 17+, spring boot, mongodb, sql, api, java, git, tooling, application layer technologies, ios, data storage technologies
Posted 6 days ago
6.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Infrastructure/Cloud Main location: India, Karnataka, Bangalore Position ID: J0525-0747 Employment Type: Full Time Position Description: Company Profile: At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com. This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position, however, only candidates selected for interviews will be contacted. No unsolicited agency referrals please. Job Title: DevOps Engineer Position: Senior Systems Engineer/Lead Analyst Experience: 6 to 9 Years Category: Software Development/Engineering Main location: Bangalore Position ID: J0525-0747 Employment Type: Full Time JD: The skills the candidate must possess: Proficiency in Python programming, including writing clean and efficient code. Experience with frameworks like FastAPI for building microservices & RESTful APIs, and Pytest for unit testing automation. Understanding of core AWS services like EC2, S3, Lambda, and RDS. Knowledge of AWS security best practices, including VPC, security groups, and IAM. Knowledge of Kubernetes concepts (pods, services, deployments, namespaces, clusters, scaling, monitoring) and YAML files. Experience with Apache NiFi for automating data flows between systems. Ability to configure and manage NiFi processors for data ingestion and transformation. Experience with continuous integration and continuous deployment (CI/CD) pipelines using DevOps tools like Jenkins, Git, Kompass. Knowledge of managing relational databases on AWS RDS, proficiency in SQL for querying and managing data, and performance tuning. Experience in executing projects in an Agile environment. The skills that are good to have: Knowledge of Oracle Applications R12. Experience in Oracle PL/SQL for writing and debugging stored procedures, functions, and triggers. Oracle SOA Suite for building, deploying, and managing service-oriented architectures. Experience with BPEL (Business Process Execution Language) for orchestrating business processes. Skills: DevOps Kubernetes DevOps Engineering Python What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life.
That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
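For a sense of the FastAPI and Pytest skills this listing calls for, here is a minimal, illustrative sketch; the endpoint, model, and test are hypothetical and not part of the role:

```python
# Minimal FastAPI microservice with a Pytest unit test (illustrative only).
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()

class Order(BaseModel):
    item: str
    quantity: int

@app.post("/orders")
def create_order(order: Order) -> dict:
    # A real service would persist the order (e.g. to RDS) or publish an event.
    return {"item": order.item, "quantity": order.quantity, "status": "accepted"}

# Pytest-style test using FastAPI's in-process TestClient; run with `pytest`.
client = TestClient(app)

def test_create_order():
    response = client.post("/orders", json={"item": "widget", "quantity": 3})
    assert response.status_code == 200
    assert response.json()["status"] == "accepted"
```

With fastapi, httpx, and pytest installed, running `pytest` against this file exercises the endpoint in-process, which is the usual pattern for unit-testing such microservices.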
Posted 1 week ago
0 years
5 - 5 Lacs
Hyderābād
On-site
Job description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Software Engineer.
In this role, you will: Work as a full-stack engineer with minimal supervision on end-to-end application design, development and maintenance activities. Work in an agile manner and change priorities depending on criticality. Work with the rest of the team to ensure seamless integration. Take ownership and see features through to deployment.
Requirements: To be successful in this role, you should meet the following requirements: Proficiency in NiFi and scripting knowledge in Python or Groovy (see the illustrative sketch below). Java and the Spring Boot framework. PostgreSQL and knowledge of microservices. GitHub, Jenkins, Ansible, etc. Understanding of CI/CD concepts. Good technical design, problem-solving and debugging skills. Good communication skills and the ability to take ownership. Any cloud platform such as GCP or AWS. JIRA automation and Confluence. Experience in ETL, ingestion or similar is preferred. Angular/React framework for UI development. GitHub Copilot.
You'll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI
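The first requirement above pairs NiFi with Python or Groovy scripting, which in practice usually means NiFi's ExecuteScript processor. The sketch below is one common Jython pattern for that processor, shown purely as an assumption about the kind of work involved; the `session`, `REL_SUCCESS` and `REL_FAILURE` objects are bound by NiFi at runtime, so the script only runs inside the processor:

```python
# Jython body for NiFi's ExecuteScript processor (illustrative sketch).
# NiFi binds `session`, `REL_SUCCESS` and `REL_FAILURE` at runtime, so this
# script runs only inside the processor, not standalone.
import json
from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import StreamCallback

class EnrichJson(StreamCallback):
    def process(self, inputStream, outputStream):
        text = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
        record = json.loads(text)
        record["processed"] = True  # hypothetical enrichment step
        outputStream.write(bytearray(json.dumps(record).encode("utf-8")))

flowFile = session.get()
if flowFile is not None:
    try:
        flowFile = session.write(flowFile, EnrichJson())
        session.transfer(flowFile, REL_SUCCESS)
    except Exception:
        session.transfer(flowFile, REL_FAILURE)
```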
Posted 1 week ago
4.0 - 5.0 years
0 Lacs
Hyderābād
On-site
Job Information Date Opened 05/23/2025 Industry Information Technology Job Type Full time Work Experience 4-5 years City Hyderabad State/Province Telangana Country India Zip/Postal Code 500059
Job Description: KMC is seeking a motivated and adaptable NiFi/Astro/ETL Engineer with 3-4 years of experience in ETL workflows, data integration, and data pipeline management. The ideal candidate will thrive in an operational setting, collaborate well with team members, and demonstrate a readiness to learn and embrace new technologies. This role will focus on the development, maintenance, and support of ETL processes to ensure efficient data workflows and high-quality deliverables (see the illustrative sketch below).
Roles and Responsibilities: Design, implement, and maintain ETL workflows using Apache NiFi, Astro, and other relevant tools. Support data extraction, transformation, and loading (ETL) processes to ensure efficient data flow across systems. Collaborate with data teams to ensure seamless integration of data from various sources, supporting data consistency and availability. Configure and manage data ingestion processes from both structured and unstructured data sources. Monitor ETL processes and data pipelines, and troubleshoot and resolve issues in real time to ensure data accuracy and availability. Provide on-call support as necessary to maintain smooth data operations. Work closely with cross-functional teams to gather requirements, refine workflows, and ensure optimal data solutions. Contribute actively to team discussions and solution planning, and provide input for continuous improvement. Stay updated with industry trends and emerging technologies in data integration and ETL practices. Show willingness to learn and adapt to new tools and methodologies as required by project or team needs.
Requirements: 3-4 years of experience in ETL workflows, specifically with Apache NiFi and Astro (or similar platforms). Proficient in SQL, with experience in data warehousing concepts. Familiarity with scripting languages (e.g., Python, shell scripting) is a plus. Basic understanding of cloud platforms (AWS, Azure, or Google Cloud).
Soft Skills: Strong problem-solving abilities with an operational mindset. Team player with effective communication skills to collaborate well within and across teams. Quick learner, adaptable to new tools, and willing to take on challenges with a positive attitude.
Benefits: Insurance - Family Term Insurance; PF; Paid Time Off - 20 days; Holidays - 10 days; Flexi timing; Competitive Salary; Diverse & Inclusive workspace
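Assuming "Astro" here refers to Astronomer's managed Apache Airflow platform, a minimal DAG of the kind such a role maintains might look like the sketch below; the DAG id, schedule, and task are hypothetical, and the `schedule` argument assumes a recent Airflow 2.x release:

```python
# Minimal Apache Airflow DAG sketch for a daily ETL step (illustrative only).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder for the real extract/transform/load logic, e.g. pulling from a
    # NiFi-fed landing zone and writing curated tables to the warehouse.
    print("ETL step executed")

with DAG(
    dag_id="daily_etl_example",   # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",            # assumes Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```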
Posted 1 week ago
0 years
3 - 5 Lacs
Pune
On-site
Pune
About Us: We empower enterprises globally through intelligent, creative, and insightful services for data integration, data analytics and data visualization. Hoonartek is a leader in enterprise transformation, data engineering and an acknowledged world-class Ab Initio delivery partner. Using centuries of cumulative experience, research and leadership, we help our clients eliminate the complexities and risk of legacy modernization and safely deliver big data hubs, operational data integration, business intelligence, risk & compliance solutions and traditional data warehouses and marts. At Hoonartek, we work to ensure that our customers, partners and employees all benefit from our unstinting commitment to delivery, quality and value. Hoonartek is increasingly the choice for customers seeking a trusted partner of vision, value and integrity.
How We Work: Define, Design and Deliver (D3) is our in-house delivery philosophy. It's culled from agile and rapid methodologies and focused on 'just enough design'. We embrace this philosophy in everything we do, leading to numerous client success stories and indeed to our own success. We embrace change, empowering and trusting our people and building long and valuable relationships with our employees, our customers and our partners. We work flexibly, even adopting traditional/waterfall methods where circumstances demand it. At Hoonartek, the focus is always on delivery and value.
Job Description: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.
Key Responsibilities:
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc. (see the illustrative sketch below)
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and Product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues
Technical Skillset:
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
• Familiarity with DataHub, Data Mesh, and security best practices is a plus
SHIFT - 24/7
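As a small illustration of the monitoring side of this SRE role, the sketch below exposes a custom metric with the prometheus_client library so a Prometheus server can scrape it; the metric name and the backlog check are hypothetical:

```python
# Expose a custom platform-health metric for Prometheus to scrape (illustrative only).
import random
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical gauge tracking FlowFiles queued in NiFi on the data platform.
queue_backlog = Gauge("nifi_queue_backlog_flowfiles", "FlowFiles waiting in NiFi queues")

def check_backlog() -> float:
    # Placeholder: a real exporter would query the NiFi or cluster API here.
    return random.uniform(0, 1000)

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        queue_backlog.set(check_backlog())
        time.sleep(15)
```

A Prometheus scrape job pointed at port 8000 would then feed dashboards and alert rules of the kind the responsibilities above describe.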
Posted 1 week ago
0 years
3 - 10 Lacs
Bengaluru
On-site
Employment Type: Permanent. Closing Date: 13 June 2025, 11:59pm. Job Title: IT Domain Specialist.
Job Summary: As the IT Domain Specialist, your role is key in improving the stability and reliability of our cloud offerings and solutions to ensure continuity of service for our customers. You will be responsible for supporting the end-to-end development of key cloud platforms and solutions, which includes technical design, integration requirements, delivery and lifecycle management. You are a specialist across and/or within a technology domain and viewed as the go-to person in the business to provide technical support in the development and delivery of cloud infrastructure platforms and solutions.
Job Description
Who We Are: Telstra is Australia's leading telecommunications and technology company, spanning over a century with a footprint in 20+ countries. In India, we're building a platform for innovative delivery and engagement that will strengthen our position as an industry leader. We've grown quickly since our inception in 2019, now with offices in Pune, Hyderabad and Bangalore.
Focus of the Role: The Event Data Engineer role is to plan, coordinate, and execute all activities related to requirements interpretation, design and implementation of Business Intelligence capability. This individual will apply proven industry and technology experience as well as communication skills, problem-solving skills, and knowledge of best practices to issues related to design, development, and deployment of mission-critical business systems, with a focus on quality application development and delivery.
What We Offer: Performance-related pay; access to thousands of learning programs so you can level up; global presence across 22 countries, with opportunities to work where we do business; up to 26 weeks maternity leave provided to the birth mother with benefits for all child births; employees are entitled to 12 paid holidays per calendar year; eligible employees are entitled to 12 days of paid sick/casual leave per calendar year; relocation support options across India, from junior to senior positions within the company; insurance benefits such as medical, accidental and life insurance.
What You'll Do: Experience in analysis, design, and development in the fields of Business Intelligence, databases and web-based applications. Experience in NiFi, Kafka, Spark, and Cloudera platform design and development (see the illustrative sketch below). Experience in Alteryx workflow development and data visualization development using Tableau to create complex, intuitive dashboards. In-depth understanding of and experience in the Cloudera framework, including CDP (Cloudera Data Platform). Experience in Cloudera Manager to monitor the Hadoop cluster and critical services. Hadoop administration (Hive, Kafka, ZooKeeper, etc.). Experience in data management including data integration, modeling, optimization and data quality. Strong knowledge of writing SQL and database management. Working experience with tools like Alteryx and KNIME is an added advantage. Implementing data security and access control compliant with Telstra Security Standards. Ability to review vendor designs and recommend solutions based on industry best practices. Understand overall business operations and develop innovative solutions to help improve productivity. Ability to understand and design provisioning solutions at Telstra, including how data lakes are used. Monitor the process of software configuration/development/testing to assure quality deliverables.
Ensure standards of QA are being met. Review deliverables to verify that they meet client and contract expectations; implement and enforce high standards for quality deliverables. Analyse performance and capacity issues of the highest complexity in data applications. Assist leadership with the development and management of new application capabilities to improve productivity. Provide training and educate other team members on core capabilities, and help them deliver high-quality solutions and deliverables/documentation. Self-motivated to design and develop against user requirements, and to test and deploy changes into production.
About You: Experience in data flow development and data visualization development to create complex, intuitive dashboards. Experience with Hortonworks Data Flow (HDF), including NiFi and Kafka, and experience with Cloudera Edge. Big data and data lake experience. Cloudera Hadoop with project implementation experience. Data analytics experience. Data analyst and data science exposure. Exposure to various data management architectures like data warehouse, data lake and data hub, and supporting processes like data integration and data modeling. Working experience with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures and integrated datasets using data integration technologies. Experience in supporting operations and knowledge of standard operating procedures: OS patches, security scans, log onboarding, agent onboarding, log extraction, etc. Development, deployment and scaling of containerised applications with Docker preferred. A good understanding of enterprise application integration, including SOA, ESB, EAI and ETL environments, and an understanding of integration considerations such as process orchestration, customer data integration and master data management. A good understanding of the security processes, standards and issues involved in multi-tier, multi-tenant web applications.
We're amongst the top 2% of companies globally in the CDP Global Climate Change Index 2023, being awarded an 'A' rating. If you want to work for a company that cares about sustainability, we want to hear from you. As part of your application with Telstra, you may receive communications from us on +61 440 135 548 (for job applications in Australia) and +1 (623) 400-7726 (for job applications in the Philippines and India). When you join our team, you become part of a welcoming and inclusive community where everyone is respected, valued and celebrated. We actively seek individuals from various backgrounds, ethnicities, genders and disabilities because we know that diversity not only strengthens our team but also enriches our work. We have zero tolerance for harassment of any kind, and we prioritise creating a workplace culture where everyone is safe and can thrive. As part of the hiring process, all identified candidates will undergo a background check, and the results will play a role in the final decision regarding your application. We work flexibly at Telstra. Talk to us about what flexibility means to you. When you apply, you can share your pronouns and/or any reasonable adjustments needed to take part equitably during the recruitment process. We are aware of current limitations with our website accessibility and are working towards improving this. Should you experience any issues accessing information or the application form, and require this in an alternate format, please contact our Talent Acquisition team on DisabilityandAccessibility@team.telstra.com.
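As a rough illustration of the NiFi/Kafka event-data work this role describes, the sketch below consumes JSON events with the kafka-python client; the topic, broker address, and consumer group are hypothetical:

```python
# Minimal Kafka consumer sketch using kafka-python (illustrative only).
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "network-events",                    # hypothetical topic name
    bootstrap_servers="broker-1:9092",   # hypothetical broker address
    group_id="event-data-engineering",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Placeholder: route or enrich the event before landing it in the data lake.
    print(event.get("event_type"), message.offset)
```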
Posted 1 week ago
4.0 years
10 - 17 Lacs
India
On-site
We are looking for an experienced Big Data Developer (immediate joiners only) with a strong background in PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 4 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions (see the illustrative sketch below).
Key Responsibilities: Design, develop, and optimize large-scale data processing pipelines using PySpark. Work with various Apache tools and frameworks (like Hadoop, Hive, HDFS, etc.) to ingest, transform, and manage large datasets. Ensure high performance and reliability of ETL jobs in production. Collaborate with data scientists, analysts, and other stakeholders to understand data needs and deliver robust data solutions. Implement data quality checks and data lineage tracking for transparency and auditability. Work on data ingestion, transformation, and integration from multiple structured and unstructured sources. Leverage Apache NiFi for automated and repeatable data flow management (if applicable). Write clean, efficient, and maintainable code in Python and Java. Contribute to architectural decisions, performance tuning, and scalability planning.
Required Skills: 5–7 years of experience. Strong hands-on experience with PySpark for distributed data processing. Deep understanding of the Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.). Solid grasp of data warehousing, ETL principles, and data modeling. Experience working with large-scale datasets and performance optimization. Familiarity with SQL and NoSQL databases. Proficiency in Python and basic to intermediate knowledge of Java. Experience in using version control tools like Git and CI/CD pipelines.
Nice-to-Have Skills: Working experience with Apache NiFi for data flow orchestration. Experience in building real-time streaming data pipelines. Knowledge of cloud platforms like AWS, Azure, or GCP. Familiarity with containerization tools like Docker or orchestration tools like Kubernetes.
Soft Skills: Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. Self-driven with the ability to work independently and as part of a team.
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,700,000.00 per year
Benefits: Health insurance
Schedule: Day shift
Supplemental Pay: Performance bonus; Yearly bonus
Ability to commute/relocate: Basavanagudi, Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Preferred)
Application Question(s): Are you ready to join within 15 days? What is your current CTC?
Experience: Python: 4 years (Preferred); PySpark: 4 years (Required); Data warehouse: 4 years (Required)
Work Location: In person
Application Deadline: 12/06/2025
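As a rough illustration of the PySpark pipeline work this listing centres on, here is a minimal batch ETL sketch; the input path, column names, and output location are hypothetical:

```python
# Minimal PySpark batch ETL sketch (illustrative; paths and columns are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl_example").getOrCreate()

# Ingest raw CSV data (e.g. landed in HDFS by a NiFi flow).
orders = spark.read.option("header", True).csv("hdfs:///data/raw/orders/")

# Clean and aggregate: completed orders summed per day.
daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("amount", F.col("amount").cast("double"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

# Write curated output as partitioned Parquet for downstream Hive/Impala queries.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "hdfs:///data/curated/daily_totals/"
)

spark.stop()
```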
Posted 1 week ago