Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
3 - 8 years
5 - 10 Lacs
Noida
Work from Office
About The Role:

Role Purpose: Interpret data and turn it into information (reports, dashboards, interactive visualizations, etc.) that can offer ways to improve a business, thus affecting business decisions.

Responsibilities:

1. Manage the technical scope of the project in line with the requirements at all stages:
a. Gather information from various sources (data warehouses, databases, data integration and modelling) and interpret patterns and trends
b. Develop record management processes and policies
c. Build and maintain relationships at all levels within the client base and understand their requirements
d. Provide sales data, proposals, data insights and account reviews to the client base
e. Identify areas to increase efficiency and automation of processes
f. Set up and maintain automated data processes
g. Identify, evaluate and implement external services and tools to support data validation and cleansing
h. Produce and track key performance indicators

2. Analyze data sets and provide adequate information:
a. Liaise with internal and external clients to fully understand data content
b. Design and carry out surveys and analyze survey data as per the customer requirement
c. Analyze and interpret complex data sets relating to the customer's business and prepare reports for internal and external audiences using business analytics reporting tools
d. Create data dashboards, graphs and visualizations to showcase business performance and provide sector and competitor benchmarking
e. Mine and analyze large datasets, draw valid inferences and present them to management using a reporting tool
f. Develop predictive models and share insights with clients as per their requirements

Deliverables:

No. | Performance Parameter | Measure
1 | Analyze data sets and provide relevant information to the client | Number of automations done, on-time delivery, CSAT score, zero customer escalations, data accuracy
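The KPI-tracking duty above can be illustrated with a small Python sketch. This is a hypothetical example; the record fields ("due", "delivered") and the on-time-delivery metric are invented for illustration, not taken from the posting.

```python
from datetime import date

# Hypothetical delivery records; field names are invented for illustration.
deliveries = [
    {"due": date(2024, 1, 10), "delivered": date(2024, 1, 9)},
    {"due": date(2024, 1, 15), "delivered": date(2024, 1, 15)},
    {"due": date(2024, 1, 20), "delivered": date(2024, 1, 22)},
]

def on_time_delivery_rate(records):
    """Share of deliveries completed on or before the due date."""
    on_time = sum(1 for r in records if r["delivered"] <= r["due"])
    return on_time / len(records)

print(round(on_time_delivery_rate(deliveries), 2))  # 2 of 3 on time -> 0.67
```

In practice such a metric would be computed over data pulled from the warehouse and surfaced on a dashboard; the core calculation is the same.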
Posted 3 months ago
8 - 13 years
45 - 50 Lacs
Bengaluru
Work from Office
We are looking for versatile, curious, and energetic Software Engineers who embrace solving complex challenges on a global scale. As a Visa Software Engineer, you will be an integral part of a multi-functional development team inventing, designing, building, and testing software products that reach a truly global customer base. While building components of powerful payment technology, you will see your efforts shaping the digital future of monetary transactions.

The work itself:
- Design code and systems that touch 40% of the world's population while influencing Visa's internal standards for scalability, security, and reusability
- Collaborate multi-functionally to create design artifacts and develop best-in-class software solutions for multiple Visa technical offerings
- Actively contribute to product quality improvements, valuable service technology, and new business flows in diverse agile squads
- Develop robust and scalable products intended for a myriad of customers, including end-user merchants, B2B, and business-to-government solutions
- Leverage innovative technologies to build the next generation of Payment Services, Transaction Platforms, Real-Time Payments, and Buy Now Pay Later technology
- Make a difference on a global or local scale through mentorship and continued learning opportunities

Essential functions:
- Works directly with product owners to gather and refine requirements across products, taking into account existing tools and solutions across the organization
- Develops and designs advanced architecture solutions that are robust and scalable, considering integrations with other solutions across the internal technical ecosystem
- Provides domain expertise on the development of technical documentation of solutions and contributes to standard processes in technical documentation as needed
- Plays a key role in the development and delivery of new features across products from end to end
The skills you bring:
- Energy and experience: a growth mindset that is curious and passionate about technologies and enjoys challenging projects on a global scale
- Challenge the status quo: comfort in pushing the boundaries, hacking beyond traditional solutions
- Language expertise: expertise in one or more general development languages (e.g., Java, C#, C++)
- Builder: experience building and deploying modern services and web applications with quality and scalability
- Learner: constant drive to learn new technologies such as Angular, React, Kubernetes, Docker, etc.
- Partnership: experience collaborating with Product, Test, DevOps, and Agile/Scrum teams

Basic qualifications:
- 8+ years of relevant work experience with a Bachelor's degree, or at least 5 years of experience with an advanced degree (e.g., Masters, MBA, JD, MD), or 2 years of work experience with a PhD, or 11+ years of relevant work experience

Preferred qualifications:
- 9 or more years of relevant work experience with a Bachelor's degree, or 7 or more years of relevant experience with an advanced degree (e.g., Masters, MBA, JD, MD), or 3 or more years of experience with a PhD
- Hands-on experience working as a Hadoop SRE managing Hadoop platforms
- Experience in building, managing and tuning the performance of Hadoop platforms
- Extensive knowledge of the Hadoop ecosystem, such as ZooKeeper, HDFS, YARN, Hive and Spark
- Excellent Python programming skills for automating repetitive DevOps tasks
- Responsible for performing SRE and engineering activities on Big Data Hadoop and HBase platforms
- Understanding of security tools like Kerberos and Ranger
- Experience with the Hortonworks distribution or open source is preferred
- Hands-on experience in debugging Hadoop issues, both on the platform and in applications
- Understanding of Linux, networking, CPU, memory and storage
- Knowledge of Java and Python is good to have
- Excellent interpersonal, verbal, and written communication skills
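As a hedged sketch of the "Python automation for repetitive DevOps tasks" expectation above: the snippet below parses a DataNode disk-usage summary and flags nodes over a threshold. The report text is hard-coded and its format invented for the example; in a real environment it might be derived from output of a command such as `hdfs dfsadmin -report`, whose exact format varies by Hadoop version.

```python
# Hypothetical per-node disk-usage report: "node_name used_percent" per line.
report = """\
node01 78
node02 91
node03 64
"""

def overloaded_nodes(report_text, threshold=85):
    """Return node names whose disk-usage percentage exceeds the threshold."""
    flagged = []
    for line in report_text.strip().splitlines():
        node, pct = line.split()
        if int(pct) > threshold:
            flagged.append(node)
    return flagged

print(overloaded_nodes(report))  # ['node02']
```

An SRE might run a check like this on a schedule and page or auto-rebalance when the flagged list is non-empty.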
Note: this is an SRE position; it is not ideal for a Hadoop developer.
Posted 3 months ago
3 - 5 years
9 - 14 Lacs
Bengaluru
Work from Office
Responsibilities:
- Design, build, and maintain data pipelines that efficiently process and transport data from various sources to storage systems or processing environments, while ensuring data integrity, consistency, and accuracy across the entire pipeline
- Integrate data from different systems, often involving data cleaning, transformation (ETL), and validation
- Design the structure of databases and data storage systems, including schemas, tables, and relationships between datasets, to enable efficient querying
- Work closely with data scientists, analysts, and other stakeholders to understand their data needs and ensure that the data is structured in a way that makes it accessible and usable
- Stay up to date with the latest trends and technologies in the data engineering space, such as new data storage solutions, processing frameworks, and cloud technologies; evaluate and implement new tools to improve data engineering processes

Qualification: Bachelor's or Master's in Computer Science & Engineering, or equivalent. A professional degree in Data Science or Engineering is desirable.

Experience level: At least 3-5 years of hands-on experience in Data Engineering and ETL.
Desired knowledge & experience:
- Spark: Spark 3.x, RDD/DataFrames/SQL, Batch/Structured Streaming; knowledge of Spark internals (Catalyst/Tungsten/Photon)
- Databricks: Workflows, SQL Warehouses/Endpoints, DLT, Pipelines, Unity, Autoloader
- IDE/tooling: IntelliJ/PyCharm, Git, Azure DevOps, GitHub Copilot
- Testing: pytest, Great Expectations
- CI/CD: YAML Azure Pipelines, Continuous Delivery, Acceptance Testing
- Big data design: Lakehouse/Medallion architecture, Parquet/Delta, partitioning, distribution, data skew, compaction
- Languages: Python/functional programming (FP)
- SQL: T-SQL/Spark SQL/HiveQL
- Storage: data lake and big data storage design

Additionally, it is helpful to know the basics of:
- Data pipelines: ADF/Synapse Pipelines/Oozie/Airflow
- Languages: Scala, Java
- NoSQL: Cosmos, Mongo, Cassandra
- Cubes: SSAS (ROLAP, HOLAP, MOLAP), AAS, Tabular Model
- SQL Server: T-SQL, stored procedures
- Hadoop: HDInsight / MapReduce / HDFS / YARN / Oozie / Hive / HBase / Ambari / Ranger / Atlas / Kafka
- Data catalog: Azure Purview, Apache Atlas, Informatica

Required soft skills & other capabilities:
- Great attention to detail and good analytical abilities
- Good planning and organizational skills
- Collaborative approach to sharing ideas and finding solutions
- Ability to work independently and in a global team environment
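The Medallion (bronze/silver/gold) layout and date-partitioning items above can be made concrete with a small sketch. The layer names follow the common Lakehouse convention; the path scheme and dataset name are invented for the example, though the `key=value` partition-folder style matches what Parquet/Delta writers typically produce.

```python
from datetime import date

# Common Medallion layer names: raw -> cleaned -> business-ready.
LAYERS = ("bronze", "silver", "gold")

def partition_path(layer, dataset, day):
    """Build a date-partitioned storage path, e.g. for Parquet/Delta output."""
    if layer not in LAYERS:
        raise ValueError(f"unknown layer: {layer}")
    return (f"{layer}/{dataset}/"
            f"year={day.year}/month={day.month:02d}/day={day.day:02d}")

print(partition_path("silver", "orders", date(2024, 3, 7)))
# silver/orders/year=2024/month=03/day=07
```

Partitioning by date like this lets query engines prune irrelevant folders, which is one of the main levers against data skew and slow scans mentioned in the list above.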
Posted 3 months ago
4 - 9 years
6 - 11 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams and databases
- Process data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform
- Develop streaming pipelines
- Work with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Total 5-8 years of experience in data management (DW, DL, data platform, Lakehouse) and data engineering
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on cloud data platforms on Azure
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB
- Good to excellent SQL skills

Preferred technical and professional experience:
- Certification in Azure and Databricks, or Cloudera Spark certified developers
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB
- Knowledge or experience of Snowflake is an added advantage
Posted 3 months ago
6 - 8 years
8 - 15 Lacs
Mysore
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it
- Learn new technologies and apply them in feature development within the time frame provided
- Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Overall, more than 6 years of experience, with more than 4 years of strong hands-on experience in Python and Spark
- Strong technical ability to understand, design, write and debug applications in Python and PySpark
- Strong problem-solving skills

Preferred technical and professional experience:
- Hands-on experience with cloud technology (AWS/GCP/Azure)
Posted 3 months ago
2 - 4 years
4 - 6 Lacs
Kochi
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Developed PySpark code for AWS Glue jobs and EMR
- Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution
- Developed Python and PySpark programs for data analysis
- Good working experience with Python to develop a custom framework for generating rules (like a rules engine)
- Developed Hadoop streaming jobs using Python for integrating Python API-supported applications
- Developed Python code to gather data from HBase and designed solutions implemented with PySpark
- Used Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations
- Rewrote some Hive queries in Spark SQL to reduce the overall batch time

Preferred technical and professional experience:
- Understanding of DevOps
- Experience in building scalable end-to-end data ingestion and processing solutions
- Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala
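The "custom framework for generating rules (like a rules engine)" item above can be sketched in a few lines of plain Python: each rule is a named predicate applied to every record, and failures are collected per record. Rule names, field names, and sample records are invented for illustration; a production version would typically run the same predicates over Spark DataFrames.

```python
# Each rule is a (name, predicate) pair; predicates take one record (a dict).
rules = [
    ("non_empty_id", lambda r: bool(r.get("id"))),
    ("positive_amount", lambda r: r.get("amount", 0) > 0),
]

def validate(records, rules):
    """Return {record index: [names of failed rules]} for bad records only."""
    failures = {}
    for i, rec in enumerate(records):
        failed = [name for name, check in rules if not check(rec)]
        if failed:
            failures[i] = failed
    return failures

records = [{"id": "a1", "amount": 10}, {"id": "", "amount": -5}]
print(validate(records, rules))  # {1: ['non_empty_id', 'positive_amount']}
```

Keeping rules as data (a list of named predicates) is what makes this a "framework": new rules are added without touching the validation loop.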
Posted 3 months ago
6 - 11 years
14 - 17 Lacs
Mysore
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it
- Learn new technologies and apply them in feature development within the time frame provided
- Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Overall, more than 6 years of experience, with more than 4 years of strong hands-on experience in Python and Spark
- Strong technical ability to understand, design, write and debug applications in Python and PySpark
- Strong problem-solving skills

Preferred technical and professional experience:
- Hands-on experience with cloud technology (AWS/GCP/Azure)
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Hyderabad
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Must have 5+ years of experience in big data: Hadoop, Spark (Scala, Python), HBase, Hive
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git
- Developed Python and PySpark programs for data analysis
- Good working experience with Python to develop a custom framework for generating rules (like a rules engine)
- Developed Python code to gather data from HBase and designed solutions implemented with PySpark
- Used Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations

Preferred technical and professional experience:
- Understanding of DevOps
- Experience in building scalable end-to-end data ingestion and processing solutions
- Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Bengaluru
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Must have 5+ years of experience in big data: Hadoop, Spark (Scala, Python), HBase, Hive
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git
- Developed Python and PySpark programs for data analysis
- Good working experience with Python to develop a custom framework for generating rules (like a rules engine)
- Developed Python code to gather data from HBase and designed solutions implemented with PySpark
- Used Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations

Preferred technical and professional experience:
- Understanding of DevOps
- Experience in building scalable end-to-end data ingestion and processing solutions
- Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala
Posted 3 months ago
4 - 9 years
6 - 11 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams and databases
- Process data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform
- Develop streaming pipelines
- Work with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Total 6-7+ years of experience in data management (DW, DL, data platform, Lakehouse) and data engineering
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on cloud data platforms on Azure
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers such as Kafka
- Experience with Unix/Linux commands and basic work experience in shell scripting
- Demonstrated ability in designing and building data ingestion, data cleansing, ETL, loading data layers and exposing data to consumers
- Experience using DevOps and working in agile environments: collaborative teams that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies
- Proven interpersonal skills while contributing to team effort by accomplishing related results as required

Preferred technical and professional experience:
- Certification in Azure and Databricks, or Cloudera Spark certified developers
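The streaming-pipeline responsibility above can be sketched conceptually: the snippet below applies a tumbling 10-second window to a simulated event stream and sums an amount per window. In a real pipeline the events would arrive from a broker such as Kafka and the aggregation would run in Spark Structured Streaming; the timestamps and amounts here are invented.

```python
from collections import defaultdict

events = [  # (epoch_seconds, amount) - simulated stream, invented values
    (100, 5.0), (104, 2.5), (112, 1.0), (118, 4.0), (121, 3.0),
]

def tumbling_window_sums(events, window_s=10):
    """Sum amounts per tumbling window, keyed by the window's start time."""
    sums = defaultdict(float)
    for ts, amount in events:
        window_start = (ts // window_s) * window_s  # floor to window boundary
        sums[window_start] += amount
    return dict(sums)

print(tumbling_window_sums(events))  # {100: 7.5, 110: 5.0, 120: 3.0}
```

The same floor-to-boundary idea underlies windowed aggregations in most streaming engines; what the engines add is handling of late and out-of-order events, which this sketch ignores.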
Posted 3 months ago
4 - 9 years
6 - 11 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams and databases
- Process data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform
- Develop streaming pipelines
- Work with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Total 5-7+ years of experience in data management (DW, DL, data platform, Lakehouse) and data engineering
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on cloud data platforms on Azure
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB
- Exposure to streaming solutions and message brokers such as Kafka
- Experience with Unix/Linux commands and basic work experience in shell scripting

Preferred technical and professional experience:
- Certification in Azure and Databricks, or Cloudera Spark certified developers
Posted 3 months ago
4 - 9 years
6 - 11 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams and databases
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on cloud data platforms (AWS) or HDFS
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform
- Develop streaming pipelines
- Work with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on cloud data platforms on AWS
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers such as Kafka

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Spark certified developers
Posted 3 months ago
4 - 9 years
6 - 11 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams and databases
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on cloud data platforms (AWS) or HDFS
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform
- Develop streaming pipelines
- Work with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Total 5-7+ years of experience in data management (DW, DL, data platform, Lakehouse) and data engineering
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on cloud data platforms on AWS
- Exposure to streaming solutions and message brokers such as Kafka
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Spark certified developers
Posted 3 months ago
4 - 9 years
6 - 11 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams and databases
- Process data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform
- Develop streaming pipelines
- Work with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on cloud data platforms on Azure
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers such as Kafka

Preferred technical and professional experience:
- Certification in Azure and Databricks, or Cloudera Spark certified developers
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Hyderabad
Work from Office
Project Role : Software Development Engineer Project Role Description : Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work. Must have skills : PySpark Good to have skills : NA Minimum 5 year(s) of experience is required Educational Qualification : Engineering graduate preferably Computer Science graduate 15 years of full time education Summary:As a Software Development Engineer, you will be responsible for analyzing, designing, coding, and testing multiple components of application code using PySpark. Your typical day will involve performing maintenance, enhancements, and/or development work for one or more clients in Chennai. Roles & Responsibilities:- Design, develop, and maintain PySpark applications for one or more clients.- Analyze and troubleshoot complex issues in PySpark applications and provide solutions.- Collaborate with cross-functional teams to ensure timely delivery of high-quality software solutions.- Participate in code reviews and ensure adherence to coding standards and best practices.- Stay updated with the latest advancements in PySpark and related technologies. Professional & Technical Skills:- Must To Have Skills:Strong experience in PySpark.- Good To Have Skills:Experience in Big Data technologies such as Hadoop, Hive, and HBase.- Experience in designing and developing distributed systems using PySpark.- Strong understanding of data structures, algorithms, and software design principles.- Experience in working with SQL and NoSQL databases.- Experience in working with version control systems such as Git. 
Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering high-quality software solutions.
- This position is based at our Bangalore, Hyderabad, Chennai, and Pune offices.
- Mandatory return-to-office (RTO) for 2-3 days, working in two shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST).
Qualifications Engineering graduate, preferably Computer Science; 15 years of full-time education
Posted 3 months ago
3 - 8 years
5 - 10 Lacs
Bengaluru
Work from Office
Project Role : Application Developer
Project Role Description : Design, build, and configure applications to meet business process and application requirements.
Must have skills : Python (Programming Language)
Good to have skills : Hadoop Administration
Minimum 3 year(s) of experience is required
Educational Qualification : 15 years full-time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with the team to understand project requirements, designing and developing software solutions, and ensuring the applications are aligned with business needs. You will also be responsible for troubleshooting and resolving any application issues that arise, as well as providing technical support to end users.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Actively participate in and contribute to team discussions.
- Contribute to providing solutions for work-related problems.
- Design and develop software applications based on business requirements.
- Collaborate with the team to understand project requirements and provide technical expertise.
- Troubleshoot and resolve any application issues that arise.
- Provide technical support to end users.
- Conduct code reviews and ensure adherence to coding standards.
- Stay updated with the latest industry trends and technologies.
Professional & Technical Skills:
- Must-have skills: Proficiency in Python (Programming Language).
- Good-to-have skills: Experience with Hadoop Administration.
- Strong understanding of software development principles and best practices.
- Experience designing and developing applications using Python.
- Knowledge of database management systems and SQL.
- Familiarity with version control systems such as Git.
- Experience with agile development methodologies.
- Excellent problem-solving and analytical skills.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Python (Programming Language).
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 3 months ago
3 - 8 years
5 - 10 Lacs
Noida
Work from Office
Project Role : Data Engineer
Project Role Description : Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must have skills : PySpark
Good to have skills : NA
Minimum 3 year(s) of experience is required
Educational Qualification : 15 years full-time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will play a crucial role in managing and optimizing data infrastructure to support the organization's data needs.
About The Role :
Responsibilities:
1. Develop, optimize, and maintain large-scale data processing pipelines using Apache Spark and Python or PySpark.
2. Write and maintain efficient, reusable, and reliable code to ensure the best possible performance and quality.
3. Collaborate with data engineers, data scientists, and other stakeholders to design and implement robust data solutions.
4. Perform data extraction, transformation, and loading (ETL) operations on large datasets.
5. Troubleshoot and resolve issues related to data processing and pipeline performance.
6. Implement best practices for data processing, including data validation, error handling, and logging.
7. Work with cloud-based data storage and processing platforms such as AWS, Azure, or Google Cloud.
8. Stay up to date with the latest industry trends and technologies to ensure the team is using the best tools and techniques available.
Qualifications:
1. Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
2. Proven experience as a PySpark developer or in a similar role.
3. Strong proficiency in Python/PySpark programming and experience with Apache Spark.
4. Solid understanding of data processing concepts and ETL pipelines.
5. Experience with any cloud platform such as AWS, Azure, or Google Cloud.
6. Proficiency in SQL and experience with relational and NoSQL databases.
7. Experience with version control systems like Git.
8. Familiarity with big data tools and frameworks (e.g., Hadoop, Kafka).
9. Strong problem-solving skills and attention to detail.
10. Excellent communication and teamwork skills.
11. Familiarity with Agile development methodologies.
Qualifications 15 years full-time education
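The ETL practices listed above (validation, error handling, logging) can be illustrated with a small stdlib-only sketch; the table name, field names, and sample rows below are invented for the example, and SQLite stands in for a real warehouse target:

```python
import logging
import sqlite3

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def etl(rows, conn):
    """Validate each extracted row, transform it, and load it into SQLite.
    Invalid rows are logged and skipped rather than failing the whole batch."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL)")
    loaded = 0
    for row in rows:
        try:
            rec = (int(row["id"]), float(row["amount"]))  # validate + cast
        except (KeyError, ValueError) as exc:
            log.warning("skipping bad row %r: %s", row, exc)
            continue
        conn.execute("INSERT INTO orders VALUES (?, ?)", rec)
        loaded += 1
    conn.commit()
    return loaded

conn = sqlite3.connect(":memory:")
rows = [
    {"id": "1", "amount": "19.99"},
    {"id": "2", "amount": "not-a-number"},   # fails validation, gets logged
    {"id": "3", "amount": "5.00"},
]
print(etl(rows, conn))  # 2
```

The design choice shown, skip-and-log rather than fail-fast, is one common ETL policy; a production pipeline might instead route rejects to a quarantine table for replay.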
Posted 3 months ago
6 - 10 years
8 - 12 Lacs
Hyderabad
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.
Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it.
- Learn new technologies and apply them in feature development within the time frame provided.
- Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system.
Required education Bachelor's Degree
Preferred education Master's Degree
Required technical and professional expertise
- Overall, more than 6 years of experience, with more than 4 years of strong hands-on experience in Python and Spark.
- Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
- Strong problem-solving skills.
Preferred technical and professional experience Good to have: hands-on experience with cloud technology (AWS/GCP/Azure).
Posted 3 months ago
6 - 10 years
10 - 12 Lacs
Mysore
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.
Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it.
- Learn new technologies and apply them in feature development within the time frame provided.
- Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system.
Required education Bachelor's Degree
Preferred education Master's Degree
Required technical and professional expertise
- Overall, more than 6 years of experience, with more than 4 years of strong hands-on experience in Python and Spark.
- Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
- Strong problem-solving skills.
Preferred technical and professional experience Good to have: hands-on experience with cloud technology (AWS/GCP/Azure).
Posted 3 months ago
2 - 5 years
14 - 17 Lacs
Hyderabad
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.
Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it.
- Learn new technologies and apply them in feature development within the time frame provided.
- Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system.
Required education Bachelor's Degree
Preferred education Master's Degree
Required technical and professional expertise
- Overall, more than 6 years of experience, with more than 4 years of strong hands-on experience in Python and Spark.
- Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
- Strong problem-solving skills.
Preferred technical and professional experience Good to have: hands-on experience with cloud technology (AWS/GCP/Azure).
Posted 3 months ago
5 - 9 years
7 - 11 Lacs
Pune
Work from Office
Candidate should have experience in PySpark along with SSIS, SQL, data warehouses, databases, data visualization, and cloud architecture, and should have ETL knowledge.
- Minimum 3 years of experience in the build and deployment of big data applications using Spark SQL and Spark Streaming in Python.
- Expertise in graph algorithms and advanced recursion techniques.
- Minimum 5 years of extensive experience in the design, build, and deployment of Python-based applications.
- Minimum 3 years of experience in the following: Hive, YARN, Kafka, HBase, MongoDB.
- Hands-on experience in generating/parsing XML and JSON documents and REST API requests/responses.
- Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) and a minimum of 5 years of experience.
- Expertise in handling complex, large-scale big data environments, preferably 20 TB+.
- Hands-on experience writing complex SQL queries and exporting and importing large amounts of data using utilities.
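The JSON generating/parsing requirement above can be sketched with the standard library alone; the payload shape below is hypothetical, not that of any specific API:

```python
import json

# Build a REST-style request body, serialize it, then parse a response.
request_body = json.dumps({"query": "status", "device_ids": [101, 102]})

# A hypothetical response document, as it might arrive over HTTP.
response_text = '{"results": [{"id": 101, "ok": true}, {"id": 102, "ok": false}]}'
response = json.loads(response_text)

# Extract the ids of devices whose check failed.
failed = [r["id"] for r in response["results"] if not r["ok"]]
print(failed)  # [102]
```

The same round-trip pattern (`dumps` outbound, `loads` inbound, then navigate plain dicts and lists) covers most REST request/response handling; XML would follow the same shape with `xml.etree.ElementTree`.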
Posted 3 months ago
6 - 11 years
8 - 12 Lacs
Hyderabad
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.
Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it.
- Learn new technologies and apply them in feature development within the time frame provided.
- Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system.
Required education Bachelor's Degree
Preferred education Master's Degree
Required technical and professional expertise
- Overall, more than 6 years of experience, with more than 4 years of strong hands-on experience in Python and Spark.
- Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
- Strong problem-solving skills.
Preferred technical and professional experience Good to have: hands-on experience with cloud technology (AWS/GCP/Azure).
Posted 3 months ago
2 - 4 years
12 - 13 Lacs
Chennai
Work from Office
- Design, develop, and maintain large-scale distributed data processing systems using Hadoop, HDFS, MapReduce, and Spark.
- Build scalable data pipelines for batch and real-time processing using PySpark and Spark Streaming.
- Work with HBase to manage structured and semi-structured data storage, ensuring efficient querying and data integrity.
- Optimize and troubleshoot performance issues in distributed systems to ensure high availability and reliability.
- Write clean, maintainable, and efficient code in Python for data processing and automation tasks.
- Collaborate with data scientists, analysts, and other engineering teams to understand data requirements and deliver effective solutions.
- Implement best practices for data governance, security, and compliance.
- Monitor, troubleshoot, and improve ETL processes and workflows.
- Stay updated with emerging trends and technologies in big data and analytics.
Desired Profile:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 2+ years of hands-on experience with big data technologies such as Hadoop, HDFS, HBase, and MapReduce.
- Strong programming skills in Python.
- Experience designing and managing scalable, distributed data architectures.
- Hands-on experience with data integration, ETL development, and workflow orchestration tools.
- Solid understanding of database systems and data modeling.
- Familiarity with cloud platforms (AWS, Azure, GCP) is a plus.
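The MapReduce model named in this posting has three phases: map emits key-value pairs, a shuffle groups them by key, and reduce folds each group. A single-machine word count (toy input invented for the example) makes the phases concrete; Hadoop distributes the same three steps across a cluster:

```python
from itertools import groupby

docs = ["spark streams data", "spark batches data"]  # toy input split

def map_phase(doc):
    """Emit (word, 1) for every word, as a Hadoop mapper would."""
    return [(word, 1) for word in doc.split()]

# Shuffle: collect all emitted pairs and group them by key (word).
pairs = sorted(p for doc in docs for p in map_phase(doc))
grouped = groupby(pairs, key=lambda kv: kv[0])

# Reduce: sum the counts within each group.
counts = {word: sum(v for _, v in group) for word, group in grouped}
print(counts)  # {'batches': 1, 'data': 2, 'spark': 2, 'streams': 1}
```

The `sorted` call plays the role of the shuffle's sort-by-key; in Hadoop that sort happens across the network between mapper and reducer nodes, which is why shuffle cost dominates many MapReduce jobs.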
Posted 3 months ago