Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
2 - 7 years
4 - 9 Lacs
Andhra Pradesh
Work from Office
JD: 7+ years of hands-on experience in Python, especially with Pandas and NumPy. Good hands-on experience in Spark, PySpark, and Spark SQL. Hands-on experience with Databricks: Unity Catalog, Delta Lake, Lakehouse Platform, and Medallion Architecture. Experience with Azure Data Factory and ADLS. Experience with the Parquet and JSON file formats. Knowledge of Snowflake.
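The posting above leads with Pandas/NumPy data wrangling. Purely as an illustration (not part of the posting; the column names and data are invented), a minimal sketch of the kind of cleaning-and-aggregation work such a role involves:

```python
import numpy as np
import pandas as pd

# Hypothetical raw records, standing in for a JSON/Parquet load.
raw = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "amount": ["10.5", "n/a", "7.25", "3.0"],
    "region": ["south", "south", "north", None],
})

# Coerce the amount column to numeric; unparseable values become NaN.
raw["amount"] = pd.to_numeric(raw["amount"], errors="coerce")

# Fill missing regions, then aggregate with NumPy-backed groupby operations.
raw["region"] = raw["region"].fillna("unknown")
summary = raw.groupby("region")["amount"].agg(["count", "mean"])

print(summary)
```

The same coerce-fill-aggregate pattern scales up almost unchanged to PySpark DataFrames, which is why JDs often list the two together.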
Posted 2 months ago
3 - 8 years
10 - 14 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum experience: 3 years
Educational Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education
Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications using PySpark. Your typical day will involve collaborating with cross-functional teams, developing and deploying PySpark applications, and acting as the primary point of contact for the project.
Roles & Responsibilities:
- Lead the effort to design, build, and configure PySpark applications, collaborating with cross-functional teams to ensure project success.
- Develop and deploy PySpark applications, ensuring adherence to best practices and standards.
- Act as the primary point of contact for the project, communicating effectively with stakeholders and providing regular updates on project progress.
- Provide technical guidance and mentorship to junior team members, ensuring their continued growth and development.
- Stay updated with the latest advancements in PySpark and related technologies, integrating innovative approaches for sustained competitive advantage.
Professional & Technical Skills:
- Must-have: strong experience in PySpark.
- Good-to-have: experience with Hadoop, Hive, and other Big Data technologies.
- Solid understanding of software development principles and best practices.
- Experience with Agile development methodologies.
- Strong problem-solving and analytical skills.
Additional Information: The candidate should have a minimum of 5 years of experience in PySpark. The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
This position is based at our Bangalore, Hyderabad, Chennai, and Pune offices. Mandatory return-to-office (RTO) for 2-3 days, with work in two shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST). Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education.
Posted 2 months ago
5 - 9 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: BE
Summary: As a Databricks Unified Data Analytics Platform Application Lead, you will be responsible for leading the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve working with the Databricks Unified Data Analytics Platform, collaborating with cross-functional teams, and ensuring the successful delivery of applications.
Roles & Responsibilities:
- Lead the design, development, and deployment of applications using the Databricks Unified Data Analytics Platform.
- Act as the primary point of contact for all application-related activities, collaborating with cross-functional teams to ensure successful delivery.
- Ensure the quality and integrity of applications through rigorous testing and debugging.
- Provide technical guidance and mentorship to junior team members, fostering a culture of continuous learning and improvement.
Professional & Technical Skills:
- Must-have: expertise in the Databricks Unified Data Analytics Platform.
- Good-to-have: experience with other big data technologies such as Hadoop, Spark, and Kafka.
- Strong understanding of software engineering principles and best practices.
- Experience with agile development methodologies and tools such as JIRA and Confluence.
- Proficiency in programming languages such as Python, Java, or Scala.
Additional Information: The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform. The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
This position is based at our Chennai office. Qualifications: BE
Posted 2 months ago
7 - 8 years
15 - 25 Lacs
Chennai
Work from Office
Assistant Manager - Data Engineering
Job Summary: We are seeking a Lead GCP Data Engineer with experience in data modeling and building data pipelines. The ideal candidate should have hands-on experience with GCP services such as Composer, GCS, GBQ, Dataflow, Dataproc, and Pub/Sub, along with a proven track record in designing data solutions, covering everything from data integration to end-to-end storage in BigQuery.
Responsibilities:
- Collaborate with the client's data architects: work closely with client data architects and technical teams to design and develop customized data solutions that meet business requirements.
- Design data flows: architect and implement data flows that ensure seamless data movement from source systems to target systems, facilitating real-time or batch data ingestion, processing, and transformation.
- Data integration & ETL processes: design and manage ETL processes, ensuring the efficient integration of diverse data sources and high-quality data pipelines.
- Build data products in GBQ: build data products using Google BigQuery (GBQ), designing data models and ensuring data is structured and optimized for analysis.
- Stakeholder interaction: regularly engage with business stakeholders to gather data requirements and translate them into technical specifications, building solutions that align with business needs.
- Ensure data quality & security: implement best practices in data governance, security, and compliance for both storage and processing of sensitive data.
- Continuous improvement: evaluate and recommend new technologies and tools to improve data architecture, performance, and scalability.
Skills:
- 6+ years of development experience
- 4+ years of experience with SQL and Python
- 2+ years of experience with GCP: BigQuery, Dataflow, GCS, Postgres
- 3+ years of experience building data pipelines from scratch in a highly distributed and fault-tolerant manner
- Experience with Cloud SQL, Cloud Functions, Pub/Sub, Cloud Composer, etc.
- Familiarity with big data and machine learning tools and platforms.
- Comfortable with open-source technologies including Apache Spark, Hadoop, and Kafka.
- Comfortable with a broad array of relational and non-relational databases.
- Proven track record of building applications in a data-focused role (cloud and traditional data warehouse).
- Current or previous experience leading a team.
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
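The posting above emphasizes building pipelines "in a highly distributed and fault-tolerant manner". As an illustrative sketch only (no GCP client code; the function names and failure scenario are invented), a generic retry-with-backoff wrapper of the kind commonly placed around flaky pipeline stages:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def run_with_retries(step: Callable[[], T], attempts: int = 3,
                     base_delay: float = 0.01) -> T:
    """Run a pipeline step, retrying with exponential backoff on failure."""
    for i in range(attempts):
        try:
            return step()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the error to the scheduler
            time.sleep(base_delay * (2 ** i))  # back off before retrying
    raise RuntimeError("unreachable")

# Demo: a hypothetical load step that fails twice before succeeding.
calls = {"n": 0}
def flaky_load() -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient failure")
    return "loaded"

result = run_with_retries(flaky_load)
print(result)  # succeeds on the third attempt
```

Orchestrators such as Composer/Airflow provide this behavior natively via task retry settings; a hand-rolled wrapper like this is mainly useful inside a single task.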
Posted 2 months ago
5 - 10 years
10 - 14 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education
Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications using PySpark. Your typical day will involve collaborating with cross-functional teams, developing and deploying PySpark applications, and acting as the primary point of contact for the project.
Roles & Responsibilities:
- Lead the effort to design, build, and configure PySpark applications, collaborating with cross-functional teams to ensure project success.
- Develop and deploy PySpark applications, ensuring adherence to best practices and standards.
- Act as the primary point of contact for the project, communicating effectively with stakeholders and providing regular updates on project progress.
- Provide technical guidance and mentorship to junior team members, ensuring their continued growth and development.
- Stay updated with the latest advancements in PySpark and related technologies, integrating innovative approaches for sustained competitive advantage.
Professional & Technical Skills:
- Must-have: strong experience in PySpark.
- Good-to-have: experience with Hadoop, Hive, and other Big Data technologies.
- Solid understanding of software development principles and best practices.
- Experience with Agile development methodologies.
- Strong problem-solving and analytical skills.
Additional Information: The candidate should have a minimum of 5 years of experience in PySpark. The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
This position is based at our Bangalore, Hyderabad, Chennai, and Pune offices. Mandatory return-to-office (RTO) for 2-3 days, with work in two shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST). Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education.
Posted 2 months ago
5 - 7 years
4 - 8 Lacs
Chennai
Work from Office
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education
Summary: As a Software Development Engineer, you will be responsible for analyzing, designing, coding, and testing multiple components of application code using PySpark. Your typical day will involve performing maintenance, enhancements, and/or development work for one or more clients in Chennai.
Roles & Responsibilities:
- Design, develop, and maintain PySpark applications for one or more clients.
- Analyze and troubleshoot complex issues in PySpark applications and provide solutions.
- Collaborate with cross-functional teams to ensure timely delivery of high-quality software solutions.
- Participate in code reviews and ensure adherence to coding standards and best practices.
- Stay updated with the latest advancements in PySpark and related technologies.
Professional & Technical Skills:
- Must-have: strong experience in PySpark.
- Good-to-have: experience in Big Data technologies such as Hadoop, Hive, and HBase.
- Experience in designing and developing distributed systems using PySpark.
- Strong understanding of data structures, algorithms, and software design principles.
- Experience working with SQL and NoSQL databases.
- Experience working with version control systems such as Git.
Additional Information: The candidate should have a minimum of 5 years of experience in PySpark. The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering high-quality software solutions.
This position is based at our Bangalore, Hyderabad, Chennai, and Pune offices. Mandatory return-to-office (RTO) for 2-3 days, with work in two shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST). Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education.
Posted 2 months ago
5 - 10 years
4 - 8 Lacs
Chennai
Work from Office
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education
Summary: As a Software Development Engineer, you will be responsible for analyzing, designing, coding, and testing multiple components of application code using PySpark. Your typical day will involve performing maintenance, enhancements, and/or development work for one or more clients in Chennai.
Roles & Responsibilities:
- Design, develop, and maintain PySpark applications for one or more clients.
- Analyze and troubleshoot complex issues in PySpark applications and provide solutions.
- Collaborate with cross-functional teams to ensure timely delivery of high-quality software solutions.
- Participate in code reviews and ensure adherence to coding standards and best practices.
- Stay updated with the latest advancements in PySpark and related technologies.
Professional & Technical Skills:
- Must-have: strong experience in PySpark.
- Good-to-have: experience in Big Data technologies such as Hadoop, Hive, and HBase.
- Experience in designing and developing distributed systems using PySpark.
- Strong understanding of data structures, algorithms, and software design principles.
- Experience working with SQL and NoSQL databases.
- Experience working with version control systems such as Git.
Additional Information: The candidate should have a minimum of 5 years of experience in PySpark. The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering high-quality software solutions.
This position is based at our Bangalore, Hyderabad, Chennai, and Pune offices. Mandatory return-to-office (RTO) for 2-3 days, with work in two shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST). Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education.
Posted 2 months ago
8 - 13 years
30 - 35 Lacs
Mumbai
Work from Office
Paramatrix Technologies Pvt Ltd is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 2 months ago
4 - 9 years
6 - 11 Lacs
Bengaluru
Work from Office
About The Role
Basic Qualifications:
- 4+ years of experience in data processing and software engineering, with the ability to build high-quality, scalable data-oriented products.
- Experience with distributed data technologies (e.g., Hadoop, MapReduce, Spark, EMR, etc.) for building efficient, large-scale data pipelines.
- Strong software engineering experience with an in-depth understanding of Python, Scala, Java, or equivalent.
- Strong understanding of data architecture, modeling, and infrastructure.
- Experience with building workflows (ETL pipelines).
- Experience with SQL and optimizing queries.
- Problem solver with attention to detail who can see complex problems in the data space through end to end.
- Willingness to work in a fast-paced environment.
- MS/BS in Computer Science or relevant industry experience.
Preferred Qualifications:
- Experience building scalable applications on the cloud (Amazon AWS, Google Cloud, etc.).
- Experience building stream-processing applications (Spark Streaming, Apache Flink, Kafka, etc.).
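The qualifications above pair ETL workflow experience with SQL query work. As a self-contained illustration only (table and column names invented), a tiny extract-transform-load step using Python's built-in sqlite3 module shows the shape of such a workflow:

```python
import sqlite3

# Extract: stage raw rows into an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO raw_events VALUES (?, ?)",
                 [(1, 10.0), (1, 5.0), (2, 7.5)])

# Transform + Load: aggregate per user into a reporting table.
conn.execute("""
    CREATE TABLE user_totals AS
    SELECT user_id, SUM(amount) AS total
    FROM raw_events
    GROUP BY user_id
""")

rows = conn.execute(
    "SELECT user_id, total FROM user_totals ORDER BY user_id").fetchall()
print(rows)  # [(1, 15.0), (2, 7.5)]
```

Production pipelines swap the in-memory database for a warehouse and wrap each step in an orchestrator, but the extract-transform-load decomposition is the same.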
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Dadra and Nagar Haveli, Chandigarh
Work from Office
Data Engineer
Skills Required: Strong proficiency in PySpark, Scala, and Python. Experience with AWS Glue.
Experience Required: Minimum 5 years of relevant experience.
Location: Available across all UST locations.
Notice Period: Immediate joiners (candidates available to join by 31st January 2025).
SO - 22978624
Location - Chandigarh, Dadra & Nagar Haveli, Daman, Diu, Goa, Jammu, Lakshadweep, New Delhi, Puducherry, Sikkim
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Hyderabad
Work from Office
JR REQ - Data Engineer (PySpark, Big Data) - 4 to 8 years - Hyderabad - hemanth.karanam@tcs.com - TCS C2H - 900000
Posted 2 months ago
7 - 12 years
37 - 45 Lacs
Bengaluru
Work from Office
Authorize.net makes it simple to accept electronic and credit card payments in person, online, or over the phone. We've been working with merchants and small businesses since 1996. As a leading payment gateway, Authorize.net is trusted by more than 445,000 merchants, handling more than 1 billion transactions and USD 149 billion in payments every year. As a Staff Software Engineer at Authorize.net (a Visa solution), you will be a hands-on technical leader and guide the development of new features by translating business problems into technical solutions that resonate with our merchants and partners. You will also drive cross-team features that standardize our approach to API User Experience development and data schemas, ensuring consistent implementation of best practices across the team. Beyond features, you will also work on modernization, working across multiple teams to modernize our systems and deliver innovative online payment solutions. You will be containerizing applications, splitting monolithic codebases into microservices, and migrating on-premises workloads to the cloud. In addition, you will enable process improvements through robust DevOps practices, incorporating comprehensive release management strategies and optimized CI/CD pipelines. Collaborating with product managers, tech leads, and engineering teams, you will follow technology roadmaps and architecture best practices, communicate status, and mentor engineers in technical approaches. This position requires a solid track record of delivering scalable, reliable, and secure software solutions. While we prefer C# expertise, knowledge of other modern programming languages is also welcome. This is a hybrid position. Expectation of days in office will be confirmed by your hiring manager.
Basic Qualifications
- 7+ years of relevant work experience with a Bachelor's degree, or with an advanced degree
- Mastery of one of frontend, backend, or full stack is required
- Proficiency in one or more programming languages or technologies including, but not limited to, C#, Java, .NET, JavaScript, CSS, React, and building RESTful APIs
- Experience using various GenAI tools in the SDLC and in optimization
- Familiarity with continuous delivery and DevOps practices, including infrastructure automation, monitoring, logging, auditing, and security
- Understanding of integration patterns, API design, and schema standardization for enterprise systems
- Hands-on knowledge of microservices, containers, and cloud platforms (e.g., AWS, Azure, or GCP); GenAI is a plus
- Prior exposure to SQL and/or NoSQL data stores (e.g., HBase, Cassandra) is beneficial
- Experience with merchant data or payment technology is a plus
- Excellent communication skills, with a proven ability to mentor and guide engineering teams
Posted 2 months ago
2 - 7 years
25 - 27 Lacs
Bengaluru
Work from Office
As a Software Engineer II - Java Full Stack Developer + Big Data at JPMorgan Chase within the Commercial Bank Technology Team, you will dive head-first into creating innovative solutions that advance businesses and careers. You'll join an inspiring and curious team of technologists dedicated to developing a core business platform, involving design, analytics, development, coding, testing, and application programming that goes into creating high-quality software and new products. You'll be working with and sharing ideas, information, and innovation with our global team of technologists from all over the world.
Job Responsibilities:
- Govern architecture and design quality of service in production environments.
- Perform cross-impact analysis with systems, products, and third-party technology business stakeholders to ensure systemic changes are handled seamlessly.
- Partner with firm-wide technology architecture councils and working groups to define and drive apt software development practices in the product operating model.
- Enhance modern infrastructure, operations, and advancement in stable and mature technology platforms.
- Work with requirements which may not be available to the last level of detail.
- Keep the team and other key stakeholders up to speed on the progress of what's being developed.
- Work independently with very little supervision.
- Document and deliver crisp communication in various artifacts as needed in product roadmap and execution to technology, product, and business stakeholders.
Required qualifications, capabilities, and skills:
- Formal training or certification on software engineering concepts and 2+ years of applied experience.
- Advanced knowledge of application, data, and infrastructure architecture disciplines.
- Extensive AWS Cloud Foundry experience with transactional and analytical workloads.
- Experience in one or more languages such as Java or Python in a large enterprise; an understanding of the importance of end-to-end software development, such as Agile frameworks, is key.
- Expertise in modernization using micro-frontend patterns, microservices APIs, event-driven architecture, containerization/Kubernetes, and cloud databases.
- Operational experience with multiple applications running in Kubernetes clusters.
- Familiarity with security business risks, troubleshooting, and root cause analysis.
- Experience in domain-driven design, lean architecture, and agile delivery.
- Hands-on experience with CI/CD and the developer toolchain, and good data analytical skills.
- Experience with proven design patterns in frameworks, integration, resiliency, security, cost, high availability, and scalability.
Preferred qualifications, capabilities, and skills:
- Hands-on experience building enterprise-wide platforms or cloud PaaS/SaaS services is an advantage.
- Experience in Big Data technology (Hadoop and Spark architecture, performance tuning, Spark SQL, Hive, Sqoop, Kafka, Impala, HBase, entitlements, etc.).
Posted 2 months ago
3 - 8 years
8 - 18 Lacs
Navi Mumbai, Mumbai, Delhi
Work from Office
Kindly provide some CVs for a Hadoop Administrator.
JD: Minimum 2+ years of experience in the Hadoop ecosystem.
1. Good knowledge of Big Data / CDP architecture.
2. Knowledge of Kafka replication and troubleshooting.
3. Must have knowledge of setting up HDFS, Hive, and Kafka, and their troubleshooting.
4. Must have knowledge of setting up Kerberos, Ranger, and HBase.
5. Must have good knowledge of performance tuning of the Hadoop ecosystem.
6. Must have good knowledge of Hadoop ecosystem upgrades.
7. Installation and configuration of MySQL on Linux and Windows.
8. Understanding of Hadoop HA and recovery.
9. Ability to multi-task and context-switch effectively between different activities and teams.
10. Provide 24x7 support for critical production systems.
11. Excellent written and verbal communication.
12. Ability to organize and plan work independently.
13. Ability to automate day-to-day tasks with Unix shell scripting.
Posted 2 months ago
5 - 10 years
10 - 14 Lacs
Bengaluru
Work from Office
Java Developer with Python and Big Data at N Consulting Ltd
We are looking for Java developers with the following skills for the Bangalore location: strong Java developers (able to read and debug code) who are also scripting (Python or Perl) experts. Good-to-have skills: big data pipelines, Spark, Hadoop, and HBase. Candidates should have strong debugging skills and a minimum of 5+ years of experience.
Experience: 5-10 years
Notice Period: 0-60 days
Posted 2 months ago
5 - 9 years
7 - 11 Lacs
Pune, Hinjewadi
Work from Office
Software Requirements:
- Proficient in Java (version 1.8 or higher), with a solid understanding of object-oriented programming and design patterns.
- Experience with Big Data technologies including Hadoop, Spark, Hive, HBase, and Kafka.
- Strong knowledge of SQL and NoSQL databases, with Oracle experience preferred.
- Familiarity with data processing frameworks and formats such as JSON, Avro, and Parquet.
- Proficiency in Linux shell scripting and basic Unix OS knowledge.
- Experience with code versioning tools such as Git, and project management tools like JIRA.
- Familiarity with CI/CD tools such as Jenkins or TeamCity, and build tools like Maven.
Overall Responsibilities:
- Translate application storyboards and use cases into functional applications while ensuring high performance and responsiveness.
- Design, build, and maintain efficient, reusable, and reliable Java code.
- Develop high-performance, low-latency components that run on Spark clusters and support Big Data platforms.
- Identify and resolve bottlenecks and bugs, proposing best practices and standards.
- Collaborate with global teams to ensure project alignment and successful execution.
- Perform testing of software prototypes and facilitate their transfer to operational teams.
- Conduct analysis of large data sets to derive actionable insights and contribute to advanced analytical model building.
- Mentor junior developers and assist in design solution strategies.
Category-wise Technical Skills:
Core Development Skills:
- Strong core Java and multithreading experience.
- Knowledge of concurrency patterns and scalable application design principles.
Big Data Technologies:
- Proficiency in Hadoop ecosystem components (HDFS, Hive, HBase, Apache Spark).
- Experience in building self-service, platform-agnostic data access APIs.
Analytical Skills:
- Demonstrated ability to analyze large data sets and derive insights.
- Strong systems analysis, design, and architecture fundamentals.
Testing and CI/CD:
- Experience in unit testing and SDLC activities.
- Familiarity with Agile/Scrum methodologies for project management.
Experience:
- 5 to 9 years of experience in software development, with a strong emphasis on Java and Big Data technologies.
- Proven experience in performance tuning and troubleshooting applications in a Big Data environment.
- Experience working in a collaborative global team setting.
Day-to-Day Activities:
- Collaborate with cross-functional teams to understand and translate functional requirements into technical designs.
- Write and maintain high-quality, performant code while enforcing coding standards.
- Conduct regular code reviews and provide constructive feedback to team members.
- Monitor application performance and address issues proactively.
- Engage in daily stand-ups and sprint planning sessions to ensure alignment with team goals.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in Big Data technologies or Java development are a plus.
Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills, with the ability to work effectively in a team.
- Ability to mentor others and share knowledge within the team.
- Strong organizational skills and attention to detail, with the ability to manage multiple priorities.
Posted 2 months ago
2 - 5 years
14 - 17 Lacs
Hyderabad
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.
Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it.
- Learn new technologies and apply them in feature development within the time frame provided.
- Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Overall, more than 6 years of experience, with 4+ years of strong hands-on experience in Python and Spark.
- Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
- Strong problem-solving skills.
Preferred technical and professional experience:
- Hands-on experience with cloud technology: AWS, GCP, or Azure.
Posted 2 months ago
2 - 4 years
3 - 6 Lacs
Bengaluru
Work from Office
Job Summary: We are looking for a Production Support Engineer to analyze and resolve runtime issues, collaborate with developers on complex problem resolution, and automate manual tasks through scripting. The ideal candidate will have experience in monitoring production environments, troubleshooting, and scripting automation for reporting and maintenance.
Key Responsibilities:
- Analyze runtime issues, diagnose problems, and implement code fixes of low to medium complexity.
- Collaborate with developers to identify and resolve more complex issues.
- Address urgent issues efficiently while adhering to customer SLAs.
- Adapt and modify installers, shell scripts, and Perl scripts, automating repetitive tasks.
- Develop automation scripts for reporting, maintenance, and anomaly detection where applicable.
- Gather and relay user feedback to the development team.
- Maintain a record of problem analysis and resolution activity in an on-call tracking system.
- Proactively monitor production and non-production environments, ensuring stability and fixing issues.
- Independently identify and resolve issues before they impact production.
- Develop and enhance smaller system components as needed.
Desired Skills & Qualifications:
- Strong SQL skills with experience in MySQL and query writing.
- Proficiency in automation scripting using Python, Perl, and shell scripting (advanced tools preferred).
- Prior experience in production support.
- Hands-on programming experience, preferably in Python.
- Strong written and oral communication skills.
- Familiarity with Storm, ZooKeeper, Kafka, Elasticsearch, Aerospike, Redis, HBase, Aesop, and MySQL.
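The responsibilities above include writing automation scripts for reporting and anomaly detection. As a minimal, illustrative sketch only (the log format, threshold, and function name are all invented for this example), a script that flags minutes with an error spike in an application log:

```python
from collections import Counter

def error_spike_minutes(log_lines, threshold=2):
    """Return the minutes whose ERROR count exceeds the threshold."""
    counts = Counter()
    for line in log_lines:
        # Assumed line format: "HH:MM LEVEL message"
        ts, level, _ = line.split(" ", 2)
        if level == "ERROR":
            counts[ts] += 1
    return sorted(m for m, n in counts.items() if n > threshold)

# Demo on a hand-made log fragment.
log = [
    "10:01 INFO started",
    "10:02 ERROR db timeout",
    "10:02 ERROR db timeout",
    "10:02 ERROR db timeout",
    "10:03 ERROR retry failed",
]
print(error_spike_minutes(log))  # ['10:02']
```

A real deployment would read rotating log files and page an on-call channel instead of printing, but the count-and-threshold core is the same.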
Posted 3 months ago
14 - 22 years
45 - 75 Lacs
Bengaluru
Remote
Architecture design and total solution design, from requirements analysis through design and engineering for data ingestion, pipelines, data preparation and orchestration, and applying the right ML algorithms to the data stream and predictions.
Responsibilities:
- Define, design, and deliver ML architecture patterns operable in native and hybrid cloud architectures.
- Research, analyze, recommend, and select technical approaches to address challenging development and data integration problems related to ML model training and deployment in enterprise applications.
- Perform research activities to identify emerging technologies and trends that may affect Data Science / ML life-cycle management in the enterprise application portfolio.
- Implement the solution using AI orchestration.
Requirements:
- Hands-on programming and architecture capabilities in Python and Java.
- Minimum 6+ years of experience in enterprise application development (Java, .NET).
- Experience in implementing and deploying Machine Learning solutions (using various models, such as Linear/Logistic Regression, Support Vector Machines, (Deep) Neural Networks, Hidden Markov Models, Conditional Random Fields, Topic Modeling, Game Theory, Mechanism Design, etc.).
- Experience in building data pipelines, data cleaning, feature engineering, and feature stores.
- Experience with data platforms such as Databricks and Snowflake, and with AWS/Azure/GCP cloud and data services.
- Strong hands-on experience with statistical packages and ML libraries (e.g., R, Python scikit-learn, Spark MLlib, etc.).
- Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik, etc.).
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.).
- Hands-on experience with RDBMS, NoSQL, and big data stores such as Elastic, Cassandra, HBase, Hive, and HDFS.
- Work experience in Solution Architect / Software Architect / Technical Lead roles.
- Experience with open-source software.
Excellent problem-solving skills and the ability to break down complexity; ability to see multiple solutions to a problem and choose the right one for the situation. Excellent written and oral communication skills. Demonstrated technical expertise in architecting solutions around AI, ML, deep learning and related technologies; developing AI/ML models in real-world environments and integrating AI/ML into large-scale enterprise applications using cloud-native or hybrid technologies. In-depth experience with the AI/ML and data analytics services offered on Amazon Web Services and/or Microsoft Azure and their interdependencies. Specialization in at least one part of the AI/ML stack (frameworks and tools like MXNet and TensorFlow; ML platforms such as Amazon SageMaker for data scientists; API-driven AI services like Amazon Lex, Amazon Polly, Amazon Transcribe, Amazon Comprehend and Amazon Rekognition to quickly add intelligence to applications with a simple API call). Demonstrated experience developing best practices and recommendations around tools/technologies for ML life-cycle capabilities such as data collection, data preparation, feature engineering, model management, MLOps, model deployment approaches, and model monitoring and tuning.
- Back end: LLM APIs and hosting, both proprietary and open-source solutions; cloud providers; ML infrastructure
- Orchestration: workflow management such as LangChain, LlamaIndex, HuggingFace, Ollama
- Data management: LLM cache
- Monitoring: LLM Ops tools
- Tools & techniques: prompt engineering, embedding models, vector DBs, validation frameworks, annotation tools, transfer learning and others
- Pipelines: Gen AI pipelines and implementation on cloud platforms (preference: Azure Databricks, Docker containers, Nginx, Jenkins)
Posted 3 months ago
5 - 9 years
8 - 12 Lacs
Chennai
Work from Office
Job Summary
Core technical skills in Big Data (HDFS, Hive, Spark, HDP/CDP, ETL pipelines, SQL, Ranger, Python), Cloud services on AWS or Azure, preferably both (S3/ADLS, Delta Lake, Key Vault, HashiCorp, Splunk), and DevOps; preferably Data Quality & Governance knowledge; preferably hands-on experience with (or at least knowledge of) tools such as Dataiku/Dremio or similar. Should be able to lead the project and report status on time. Should ensure smooth release management.
Key Responsibilities
Strategy: Responsibilities include the development, testing and support required for the project.
Business: IT Projects - CPBB Data Technology.
Processes: As per SCB governance.
People & Talent: Applicable SCB guidelines.
Risk Management: Applicable SCB standards.
Regulatory & Business Conduct: Display exemplary conduct and live by the Group's Values and Code of Conduct. Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct. Lead to achieve the outcomes set out in the Bank's Conduct Principles: Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment. Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters.
Key stakeholders: Athena Program.
Other Responsibilities: Analysis, development, testing and support; leading the team; release management.
Skills and Experience: Hadoop, SQL, HDFS, Spark, Hive, Python, ETL processes, ADO & Confluence, Dremio.
Qualifications: Hadoop, HDFS, HBase, Spark, Scala, ADO & Confluence, ETL processes, SQL (expert), Dremio (entry), Dataiku (entry).
About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents, and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.
Together we:
- Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
- Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
- Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term
What We Offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing:
- Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations
- Time off including annual leave, parental/maternity leave (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to a minimum of 30 days
- Flexible working options based around home and office locations, with flexible working patterns
- Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform; development courses for resilience and other human skills; a global Employee Assistance Programme; sick leave; mental health first-aiders; and a range of self-help toolkits
- A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning
- Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies, where everyone feels respected and can realise their full potential
Posted 3 months ago
8 - 13 years
10 - 15 Lacs
Chennai
Work from Office
Overall Responsibilities:
- Translate application storyboards and use cases into functional applications.
- Design, build, and maintain efficient, reusable, and reliable Java code.
- Ensure the best possible performance, quality, and responsiveness of applications.
- Identify bottlenecks and bugs, and devise solutions to these problems.
- Develop high-performance and low-latency components to run on Spark clusters.
- Interpret functional requirements into design approaches that can be served through the Big Data platform.
- Collaborate and partner with global teams based across different locations.
- Propose best practices and standards; hand over to operations.
- Perform testing of software prototypes and transfer them to the operational team.
- Process data using Hive, Impala, and HBase.
- Perform analysis of large data sets and derive insights.
Technical Skills (category-wise):
Java Development:
- Solid understanding of object-oriented programming and design patterns.
- Strong Java experience with Java 1.8 or higher.
- Strong core Java and multithreading working experience.
- Understanding of concurrency patterns and multithreading in Java.
- Proficient understanding of code versioning tools, such as Git.
- Familiarity with build tools such as Maven and continuous integration tools like Jenkins/TeamCity.
Big Data Technologies:
- Experience in Big Data technologies like HDFS, Hive, HBase, Apache Spark, and Kafka.
- Experience in building self-service, platform-agnostic data access APIs.
- Service-oriented architecture, and data standards like JSON, Avro, and Parquet.
- Experience in building advanced analytical models based on business context.
Data Processing:
- Comfortable working with large data volumes and with understanding logical data structures and analysis techniques.
- Processing data using Hive, Impala, and HBase.
- Strong systems analysis, design, and architecture fundamentals, unit testing, and other SDLC activities.
- Application performance tuning and troubleshooting experience, and implementation of these skills in the Big Data domain.
Additional Skills:
- Experience in Linux shell scripting.
- Experience in RDBMS and NoSQL databases.
- Basic Unix OS and scripting knowledge.
- Optional: familiarity with the Arcadia tool for analytics.
- Optional: familiarity with cloud and container technologies.
Experience: 8+ years of relevant experience in Java and Big Data technologies.
Day-to-Day Activities:
- Develop and maintain Java code for Big Data applications.
- Process and analyze large data sets using Big Data technologies.
- Collaborate with global teams to design and implement solutions.
- Perform testing and transfer software prototypes to the operational team.
- Troubleshoot and resolve performance issues and bugs.
- Ensure adherence to best practices and standards in development.
Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, or equivalent experience.
Soft Skills:
- Excellent communication and collaboration abilities.
- Strong interpersonal and teamwork skills.
- Ability to work under pressure and meet tight deadlines.
- Positive attitude and strong work ethic.
- Commitment to continuous learning and professional development.
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply.
We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law. Candidate Application Notice
Posted 3 months ago
5 - 10 years
15 - 25 Lacs
Delhi NCR, Bengaluru, Hyderabad
Hybrid
Big Data Engineer JD
With a startup spirit and 115,000+ curious and courageous minds, we have the expertise to go deep with the world's biggest brands, and we have fun doing it. We dream in digital, dare in reality, and reinvent the ways companies work to make an impact far bigger than just our bottom line. We're harnessing the power of technology and humanity to create meaningful transformation that moves us forward in our pursuit of a world that works better for people. Now, we're calling upon the thinkers and doers, those with a natural curiosity and a hunger to keep learning and keep growing; people who thrive on fearlessly experimenting, seizing opportunities, and pushing boundaries to turn our vision into reality. And as you help us create a better world, we will help you build your own intellectual firepower. Welcome to the relentless pursuit of better.
Inviting applications for the role of Principal Consultant, Big Data Engineer! The candidate should be a hands-on Hadoop/Spark/PySpark developer, actively participating in the Agile process, keen on learning new technologies, and setting high standards for themselves and others on the team. The candidate will have excellent technical writing and communication skills.
Responsibilities
- Work with multiple business teams to fully understand business requirements and translate them into data structures.
- Build and maintain conceptual/logical/physical data models.
- Develop technical standards, procedures, and guidelines.
- Create and maintain technical documentation, architecture designs and data flow diagrams.
- Constantly improve the SDLC process, actively participating in adopting best industry practices.
- Interface with business professionals, application developers and technical staff working in an agile process and environment.
- Document success criteria and monitor solution effectiveness, including system performance, adoption and other key metrics.
Requirements
- 10+ years of experience developing enterprise-grade data integration solutions.
- Good understanding of the Agile process, continuous delivery and best SDLC practices.
- Strong understanding of the Big Data stack, including PySpark, Spark, Hive, HBase, Hadoop and Kafka.
- Strong Python knowledge; Scala programming is good to have.
- Must have experience with Pandas, ETL and data warehousing.
- Experience implementing microservices.
- Experience implementing cloud-based services.
- Good understanding of distributed systems design and implementation.
- Ability to operate effectively in ambiguous situations.
- Ability to learn quickly, manage work independently, and be a team player.
Minimum Qualifications
- Bachelor's or Master's degree in Computer Science, BTech, BE or equivalent.
- Strong SQL and relational and non-relational database knowledge.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way.
Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 3 months ago
5 - 7 years
4 - 8 Lacs
Chennai
Work from Office
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must have skills: PySpark
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education
Summary: As a Software Development Engineer, you will be responsible for analyzing, designing, coding, and testing multiple components of application code using PySpark. Your typical day will involve performing maintenance, enhancements, and/or development work for one or more clients in Chennai.
Roles & Responsibilities:
- Design, develop, and maintain PySpark applications for one or more clients.
- Analyze and troubleshoot complex issues in PySpark applications and provide solutions.
- Collaborate with cross-functional teams to ensure timely delivery of high-quality software solutions.
- Participate in code reviews and ensure adherence to coding standards and best practices.
- Stay updated with the latest advancements in PySpark and related technologies.
Professional & Technical Skills:
- Must Have Skills: Strong experience in PySpark.
- Good To Have Skills: Experience in Big Data technologies such as Hadoop, Hive, and HBase.
- Experience in designing and developing distributed systems using PySpark.
- Strong understanding of data structures, algorithms, and software design principles.
- Experience in working with SQL and NoSQL databases.
- Experience in working with version control systems such as Git.
Additional Information: The candidate should have a minimum of 5 years of experience in PySpark. The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering high-quality software solutions.
This position is based at our Bangalore, Hyderabad, Chennai and Pune offices. Mandatory return to office (RTO) for 2-3 days, working in two shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST). Qualifications: Engineering graduate, preferably Computer Science; 15 years of full-time education.
Posted 3 months ago
5 - 8 years
7 - 10 Lacs
Bengaluru
Work from Office
Job Title: Java Developer
About Us
Capco, a Wipro company, is a global technology and management consulting firm. It was awarded Consultancy of the Year in the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial and energy sectors. We are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO?
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, on projects that will transform the financial services industry.
MAKE AN IMPACT
Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.
Work location: Bangalore
JD as below - 5-8 years. Only 30 days.
Must have:
- Basic data structures, algorithms, and problem-solving knowledge
- Java programming, the Spring Framework, and Spring Boot
- Functional programming: Scala/Python
- Big Data: Hadoop (Spark and Hive)
- Any RDBMS (preferably MySQL)
- Hands-on multithreading and distributed application development
Good to have:
- UI: Angular/React JS
- NoSQL DB (HBase, MongoDB, or Cassandra)
- Basic understanding of system design
- Any CI/CD tool
- Distributed messaging (Kafka)
- Cache systems (Redis or Memcached)
WHY JOIN CAPCO?
You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry. We offer:
- A work culture focused on innovation and creating lasting value for our clients and employees
- Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
- A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
- A diverse, inclusive, meritocratic culture
Posted 3 months ago
2 - 7 years
27 - 31 Lacs
Bengaluru
Work from Office
Visa is seeking a Data Engineer in the Data Platform department to act as one of the key technology leaders building and managing Visa's technology assets in the Platform as a Service organisation. As a Data Engineer, you will work on AI platform enhancements, and you will have the opportunity to work on a big data open-source tech stack.
Bachelor's degree, OR 2+ years of relevant work experience.
Preferred Qualifications (Associate): 2 or more years of work experience, with 1+ years of software development experience. Proficiency in engineering practices and writing high-quality code, with expertise in one of Java, Scala or Python. Experience in building platforms on top of Big Data open-source platforms like Hadoop/Hive/Spark/Tez. Contributions to one or more Big Data open-source technologies like Hadoop, Hive, Spark, Tez, Kafka, HBase, etc. will be an added plus.
Posted 3 months ago
HBase is a distributed, scalable NoSQL database commonly used in big data applications. As demand for big data solutions continues to grow, so does the demand for professionals with HBase skills in India. Job seekers looking to explore opportunities in this field can find a variety of roles across different industries and sectors.
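One HBase concept that comes up often in interviews is row-key design: because HBase sorts and partitions rows by key, sequential keys can hotspot a single region server. A common mitigation is key salting. Below is a minimal pure-Python sketch of the idea (it does not use the real HBase client API; the function name and bucket count are illustrative):

```python
import hashlib

def salted_row_key(user_id: str, buckets: int = 16) -> str:
    """Prefix the natural key with a stable, hash-derived salt so that
    otherwise-sequential keys spread across HBase regions instead of
    hotspotting one region server. The salt is deterministic, so reads
    can recompute it from the same user_id."""
    salt = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % buckets
    return f"{salt:02d}|{user_id}"

# Sequential IDs land in (generally) different salt buckets:
keys = [salted_row_key(f"user{n:05d}") for n in range(4)]
```

The trade-off is that salting breaks efficient range scans over the natural key, since a scan must now fan out across all salt buckets.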
These cities are known for their strong presence in the IT industry and are actively hiring professionals with HBase skills.
The salary range for HBase professionals in India can vary based on experience and location. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15-20 lakhs per annum.
In the HBase domain, a typical career progression may look like:
- Junior HBase Developer
- HBase Developer
- Senior HBase Developer
- HBase Architect
- HBase Administrator
- HBase Consultant
- HBase Team Lead
In addition to HBase expertise, professionals in this field are often expected to have knowledge of:
- Apache Hadoop
- Apache Spark
- Data Modeling
- Java programming
- Database design
- Linux/Unix
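For the data-modeling side of that list, it helps to internalize HBase's logical layout: a table maps a row key to column families, each family to qualifiers, and each cell to multiple timestamped versions. The toy in-memory model below sketches this in plain Python (the class and method names are illustrative; real HBase stores raw bytes, persists to HDFS, and exposes this through its Java/Thrift APIs):

```python
import time
from collections import defaultdict

class ToyHBaseTable:
    """Toy, in-memory model of HBase's logical layout:
    row key -> 'family:qualifier' -> {timestamp: value}.
    Illustrative only, not an HBase client."""

    def __init__(self, max_versions: int = 3):
        self.max_versions = max_versions
        self.rows = defaultdict(lambda: defaultdict(dict))

    def put(self, row, family, qualifier, value, ts=None):
        cell = self.rows[row][f"{family}:{qualifier}"]
        cell[ts if ts is not None else time.time_ns()] = value
        # Keep only the newest versions, mimicking HBase's VERSIONS setting.
        for old in sorted(cell)[:-self.max_versions]:
            del cell[old]

    def get(self, row, family, qualifier):
        # Like HBase's default Get: return the newest version of the cell.
        cell = self.rows[row].get(f"{family}:{qualifier}", {})
        return cell[max(cell)] if cell else None

t = ToyHBaseTable()
t.put("user1", "info", "city", "Pune", ts=1)
t.put("user1", "info", "city", "Chennai", ts=2)
print(t.get("user1", "info", "city"))  # newest version wins: Chennai
```

The key design insight this illustrates: HBase cells are versioned by timestamp, and reads return the newest version by default, which is why schema design in HBase revolves around row keys and column families rather than fixed columns.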
As you prepare for HBase job opportunities in India, make sure to brush up on your technical skills, practice coding exercises, and be ready to showcase your expertise in interviews. With the right preparation and confidence, you can land a rewarding career in the exciting field of HBase. Good luck!