3.0 - 8.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Job Responsibilities:
- Work with product management and the dev team to design, develop, and deliver features and enhancements
- Collaborate closely with peers to develop clean code: readable, testable, high quality, performant, and secure
- Develop code using pair and team programming approaches
- Perform peer code reviews and walk-throughs
- Automate testing and deployment of software to enable delivering improvements to customers on a regular cadence
- Work closely with the agile team to innovate and improve everything

Minimum Requirements:
- B.S. in Computer Science or equivalent is preferred
- 4+ years of experience with modern languages such as Java/C#/JavaScript/Scala
- Recent 2+ years of Scala functional programming in an enterprise software environment
- Experience with RESTful applications
- Experience with microservices
- Ability to work effectively in a distributed, collaborative, agile environment and deliver solutions on a regular cadence
Posted 2 weeks ago
5.0 - 6.0 years
9 - 14 Lacs
Noida
Work from Office
Requirements:
- Solid understanding of object-oriented programming and design patterns.
- 5 to 6 years of strong experience with big data.
- Comfortable working with large data volumes and able to demonstrate a firm understanding of logical data structures and analysis techniques.
- Experience in big data technologies such as HDFS, Hive, HBase, Apache Spark, PySpark, and Kafka.
- Proficient in code versioning tools such as Git and Bitbucket, and tracking tools such as Jira.
- Strong systems analysis, design, and architecture fundamentals, unit testing, and other SDLC activities.
- Experience in Linux shell scripting.
- Demonstrated analytical and problem-solving skills.
- Excellent troubleshooting and debugging skills.
- Strong communication and aptitude.
- Ability to write reliable, manageable, and high-performance code.
- Good knowledge of database principles, practices, and structures, including SQL development experience, preferably with Oracle.
- Understanding of the fundamental design principles behind a scalable application.
- Basic Unix OS and scripting knowledge.

Good to have:
- Financial markets background is preferable but not a must.
- Experience in Jenkins, Scala, Autosys.
- Familiarity with build tools such as Maven and continuous integration.
- Working knowledge of Docker, Kubernetes, OpenShift, or Mesos is a plus.
- Basic experience with data preparation tools.
- Experience with CI/CD build pipelines.

Mandatory Competencies: Big Data (HDFS, Hive, Hadoop, PySpark), Communication, Data Science and Machine Learning (Apache Spark)
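For context, a minimal, hedged sketch of the kind of PySpark-on-Hive work this listing describes; the database, table, and column names are hypothetical, not from the posting:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive-enabled Spark session (assumes a cluster with Hive metastore access)
spark = (SparkSession.builder
         .appName("trades-daily-aggregate")
         .enableHiveSupport()
         .getOrCreate())

# Read a hypothetical Hive table, then clean and aggregate large volumes
trades = spark.table("finance.trades_raw")

daily_volume = (trades
                .filter(F.col("status") == "SETTLED")
                .groupBy("trade_date", "instrument_id")
                .agg(F.sum("quantity").alias("total_qty"),
                     F.avg("price").alias("avg_price")))

# Persist results back to Hive for downstream consumers
daily_volume.write.mode("overwrite").saveAsTable("finance.trades_daily_agg")
```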
Posted 2 weeks ago
4.0 - 8.0 years
20 - 25 Lacs
Noida
Work from Office
Key Responsibilities:
- Senior professional-level role: a hands-on, enterprise-level architect/solution leader with deep experience in data engineering technologies and public cloud (AWS/Azure/GCP).
- Engage with client managers to understand their current state and business problems/opportunities, conceptualize solution options, discuss and finalize with client stakeholders, help bootstrap a team, and deliver PoCs/PoTs/MVPs.
- Help build overall competency within teams working on related client engagements and the rest of Iris in Data & Analytics, including Data Engineering, Analytics, Data Science, AI/ML, ML and Data Ops, Data Governance, and related solution patterns, platforms, tools, and technology.
- Stay up to date in the field on best practices and new and emerging tools and trends in data and analytics.
- Focus on building practice competencies in Data & Analytics.

Professional Experience and Qualifications:
- Bachelor's or Master's degree in a software discipline.
- Experience with data architecture and implementation of large-scale, enterprise-level data lake/data warehousing, big data, and analytics applications.
- A data engineering professional who has led multiple engagements in terms of solutioning, architecture, and delivery.
- Excellent English communication, both written and verbal.
- Technology: for the above skill areas, must have lifecycle experience with some of the tools such as AWS Glue, Redshift, Azure Data Lake, Databricks, and Snowflake; database experience and programming experience with Spark (Spark SQL, PySpark, Python, etc.).
Posted 2 weeks ago
4.0 - 6.0 years
9 - 13 Lacs
Chennai
Work from Office
As a Senior ML Engineer / MLOps - GCP Specialist at Incedo, you will be responsible for building and deploying machine learning models in production environments. You will work with data scientists, data engineers, and software developers to design and implement machine learning pipelines. You will be skilled in programming languages such as Python or Java and have experience with ML tools such as TensorFlow or PyTorch. You will be responsible for ensuring that models are scalable, efficient, and deployable in production environments.

Roles & Responsibilities:
- Designing and developing machine learning (ML) models and algorithms
- Implementing and maintaining ML pipelines and workflows
- Collaborating with data scientists and analysts to deploy and monitor ML models
- Developing and implementing ML deployment and monitoring strategies
- Providing guidance and mentorship to junior ML engineers
- Troubleshooting and resolving ML platform issues

Technical Skills Requirements:
- Proficiency in programming languages such as Python, Java, or Scala.
- Understanding of machine learning algorithms and techniques such as supervised learning, unsupervised learning, or reinforcement learning.
- Experience with deep learning frameworks such as TensorFlow, PyTorch, or Keras.
- Knowledge of cloud computing and containerization technologies such as Docker, Kubernetes, or AWS.
- Excellent communication skills, with the ability to convey complex technical information to non-technical stakeholders in a clear and concise manner.
- Must understand the company's long-term vision and align with it.
- Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team.

Qualifications:
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
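As a hedged illustration of the model-building half of this role, here is a minimal Keras train-and-export sketch; the dataset shape, labels, and file name are invented for illustration:

```python
import numpy as np
import tensorflow as tf

# Hypothetical tabular data: 1,000 samples, 20 features, binary label
X = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)

# Keras-native save format (newer TF versions); older stacks used SavedModel/HDF5
model.save("candidate_model.keras")
```

In a production pipeline the saved artifact would then be versioned and pushed to a serving endpoint, which is the MLOps half of the role.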
Posted 2 weeks ago
7.0 - 9.0 years
6 - 10 Lacs
Chennai
Work from Office
As a Technical Lead - Cloud Data Platform (AWS) at Incedo, you will be responsible for designing, deploying, and maintaining cloud-based data platforms on AWS. You will work with data engineers, data scientists, and business analysts to understand business requirements and design scalable, reliable, and cost-effective solutions that meet those requirements.

Roles & Responsibilities:
- Designing, developing, and deploying cloud-based data platforms using Amazon Web Services (AWS)
- Integrating and processing large amounts of structured and unstructured data from various sources
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Collaborating with other teams to ensure the consistency and integrity of data
- Troubleshooting and resolving data platform issues

Technical Skills Requirements:
- In-depth knowledge of AWS services and tools such as AWS Glue, AWS Redshift, and AWS Lambda
- Experience in building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies
- Familiarity with cloud-based infrastructure and deployment, specifically on AWS
- Strong knowledge of programming languages such as Python, Java, and SQL
- Excellent communication skills, with the ability to convey complex technical information to non-technical stakeholders in a clear and concise manner
- Must understand the company's long-term vision and align with it
- Should be open to new ideas and willing to learn and develop new skills; should also work well under pressure and manage multiple tasks and priorities

Qualifications:
- 7-9 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
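For illustration, a minimal AWS Glue job skeleton of the kind this role would own. This is a sketch only: the catalog database, table name, and S3 path are hypothetical:

```python
import sys
from awsglue.transforms import Filter
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a hypothetical catalog table, drop rows missing the key, write Parquet to S3
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders_raw")
cleaned = Filter.apply(frame=source, f=lambda row: row["order_id"] is not None)
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet")

job.commit()
```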
Posted 2 weeks ago
6.0 years
0 Lacs
Kochi, Kerala, India
On-site
About Us: Instana delivers cutting-edge cloud observability and application performance monitoring (APM) solutions. Our SaaS multi-cloud platform processes and analyzes millions of spans, traces, metrics, and events per second to provide actionable insights to our fast-growing customer base.

Role Overview: Join us as a Software Developer on our team to design, develop, and maintain high-load, big-data JVM applications. Collaborate with a cross-functional team to ensure efficient and reliable product delivery.

Key Responsibilities:
- Design, develop, and maintain scalable, high-performance JVM applications.
- Translate business requirements into technical solutions.
- Write clean, efficient, and well-tested code.
- Conduct functional, unit, and integration testing to ensure high-quality delivery.
- Optimize application performance and scalability.
- Troubleshoot and resolve production issues.
- Stay updated on the latest Java and backend trends.

Required Technical and Professional Expertise:
- 6+ years of hands-on experience in development.
- Solid understanding of OOP principles and design patterns.
- Proficiency in cloud-native platforms (AWS, GCP, Azure).
- Practical experience in developing for Kubernetes.
- Experience with big-data processing and analytics: Kafka; ClickHouse is a plus.
- Strong problem-solving and analytical skills.
- Effective communication and teamwork abilities.

Preferred Technical and Professional Experience:
- Experience in high-load data processing and distributed systems.
- Knowledge of microservices architecture.
- Familiarity with DevOps tools and practices (TDD, CI/CD, SCM).
- Hands-on experience with cloud observability and APM tools.
- Proficiency in RDBMS technologies (JDBC, SQL).
- Familiarity with NoSQL datastores (Elastic, Cassandra, S3).
- Expertise in Java web frameworks (Spring Boot, Quarkus, Dropwizard).
- Test-driven development (TDD) using JUnit or similar frameworks.
- Modern JVM languages: Kotlin, Scala, Clojure.
- Modern Java backend frameworks and libraries: Reactor, Kafka Streams, jOOQ, cloud SDKs, serverless.
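The role itself is JVM-centric (e.g., Kafka Streams), but to keep all examples here in one language, a hedged Python sketch of the underlying Kafka consumption pattern; the topic, group, and event schema are assumptions:

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic of span events; a production consumer would be a JVM service
consumer = KafkaConsumer(
    "observability.spans",
    bootstrap_servers=["localhost:9092"],
    group_id="span-aggregator",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    span = message.value
    # Trivial stand-in for enrichment/aggregation before sinking to a store
    if span.get("duration_ms", 0) > 1000:
        print(f"slow span: {span.get('trace_id')} took {span['duration_ms']} ms")
```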
Posted 2 weeks ago
2.0 years
0 Lacs
Delhi, India
On-site
Job Description

About The Job: Help our clients - internal and external - understand and use RMS services better by understanding their requirements and queries, and helping address the same through knowledge of data science and RMS.

Responsibilities:
- Building knowledge of the Nielsen suite of products and demonstrating the same
- Understanding client concerns
- Able to put forth ways and means of solving client concerns, with supervision
- Automation and development of solutions for existing processes
- Taking initiative to understand concerns/problems in the RMS product and participating in product improvement initiatives

Qualifications:
- Professionals with degrees in Maths, Data Science, Statistics, or related fields involving statistical analysis of large data sets
- 2-3 years of experience in market research or a relevant field

Mindset and Approach to Work:
- Embraces change, innovation, and iterative processes in order to continuously improve the product's value to clients
- Continuously collaborates and supports to improve the product
- Active interest in arriving at collaboration and consensus on communication plans, deliverables, and deadlines
- Plans and completes assignments independently within an established framework, breaking down complex tasks and making reasonable decisions; work is reviewed for overall technical soundness
- Participates in data experiments and PoCs, setting measurable goals, timelines, and reproducible outcomes
- Applies critical thinking and takes initiative
- Continuously reviews the latest industry innovations and effectively applies them to their work
- Consistently challenges and analyzes data to ensure accuracy

Functional Skills:
- Ability to manipulate, analyze, and interpret large data sources (see the sketch after this posting)
- Experienced in high-level programming languages (e.g., Python, R, SQL, Scala) as well as data visualization tools (e.g., Power BI, Spotfire, Tableau, MicroStrategy)
- Able to work in a virtual environment; familiar with Git/Bitbucket processes
- People with at least some experience in RMS or NIQ will have an advantage
- Can use a logical reasoning process to break down and work through increasingly challenging situations or problems to arrive at positive outcomes
- Identifies and uses data from various sources to influence decisions
- Interprets data effectively in relation to business objectives

Soft Skills:
- Ability to engage/communicate with team and extended team members
- Can adapt to change and new ideas or ways of working
- Exhibits emotional intelligence when partnering with internal and external stakeholders

Additional Information - Our Benefits:
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ: NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach.
With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Our commitment to Diversity, Equity, and Inclusion: NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do at the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
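To ground the "manipulate, analyze, and interpret large data sources" requirement above, a small, hedged pandas sketch; the file and column names are invented for illustration:

```python
import pandas as pd

# Hypothetical retail measurement extract
sales = pd.read_csv("rms_sales_extract.csv", parse_dates=["week_ending"])

# Typical analyst moves: clean, aggregate weekly, and rank for a client readout
sales = sales.dropna(subset=["store_id", "units_sold"])
by_brand = (sales
            .groupby(["brand", pd.Grouper(key="week_ending", freq="W")])
            .agg(units=("units_sold", "sum"), revenue=("value_sales", "sum"))
            .reset_index())

top_brands = (by_brand.groupby("brand")["revenue"].sum()
              .sort_values(ascending=False).head(10))
print(top_brands)
```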
Posted 2 weeks ago
2.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description

About The Job: Help our clients - internal and external - understand and use RMS services better by understanding their requirements and queries, and helping address the same through knowledge of data science and RMS.

Responsibilities:
- Building knowledge of the Nielsen suite of products and demonstrating the same
- Understanding client concerns
- Able to put forth ways and means of solving client concerns, with supervision
- Automation and development of solutions for existing processes
- Taking initiative to understand concerns/problems in the RMS product and participating in product improvement initiatives

Qualifications:
- Professionals with degrees in Maths, Data Science, Statistics, or related fields involving statistical analysis of large data sets
- 2-3 years of experience in market research or a relevant field

Mindset and Approach to Work:
- Embraces change, innovation, and iterative processes in order to continuously improve the product's value to clients
- Continuously collaborates and supports to improve the product
- Active interest in arriving at collaboration and consensus on communication plans, deliverables, and deadlines
- Plans and completes assignments independently within an established framework, breaking down complex tasks and making reasonable decisions; work is reviewed for overall technical soundness
- Participates in data experiments and PoCs, setting measurable goals, timelines, and reproducible outcomes
- Applies critical thinking and takes initiative
- Continuously reviews the latest industry innovations and effectively applies them to their work
- Consistently challenges and analyzes data to ensure accuracy

Functional Skills:
- Ability to manipulate, analyze, and interpret large data sources
- Experienced in high-level programming languages (e.g., Python, R, SQL, Scala) as well as data visualization tools (e.g., Power BI, Spotfire, Tableau, MicroStrategy)
- Able to work in a virtual environment; familiar with Git/Bitbucket processes
- People with at least some experience in RMS or NIQ will have an advantage
- Can use a logical reasoning process to break down and work through increasingly challenging situations or problems to arrive at positive outcomes
- Identifies and uses data from various sources to influence decisions
- Interprets data effectively in relation to business objectives

Soft Skills:
- Ability to engage/communicate with team and extended team members
- Can adapt to change and new ideas or ways of working
- Exhibits emotional intelligence when partnering with internal and external stakeholders

Additional Information - Our Benefits:
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ: NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach.
With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Our commitment to Diversity, Equity, and Inclusion: NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do at the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 2 weeks ago
8.0 years
0 Lacs
India
On-site
Working at Atlassian: Atlassians can choose where they work - whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company.

What you'll do:
- Build and ship features and capabilities daily in a highly scalable, cross-geo distributed environment
- Be part of an amazing open and collaborative work environment with other experienced engineers, architects, product managers, and designers
- Review code with best practices of readability, testing patterns, documentation, reliability, security, and performance considerations in mind
- Mentor and level up the skills of your teammates by sharing your expertise in formal and informal knowledge-sharing sessions
- Ensure full visibility, error reporting, and monitoring of high-performing backend services
- Participate in Agile software development, including daily stand-ups, sprint planning, team retrospectives, and show-and-tell demo sessions

Your background:
- 8+ years of experience building and developing backend applications
- Bachelor's or Master's degree, with a preference for a Computer Science degree
- Experience crafting and implementing highly scalable and performant RESTful micro-services
- Proficiency in any modern object-oriented programming language (e.g., Java, Kotlin, Go, Scala, Python)
- Fluency in any one database technology (e.g., RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra)
- Real passion for collaboration and strong interpersonal and communication skills
- Broad knowledge and understanding of the SaaS, PaaS, and IaaS industry, with hands-on experience of public cloud offerings (AWS, GAE, Azure)
- Familiarity with cloud architecture patterns and an engineering discipline to produce software with quality

Our perks & benefits: Atlassian offers a variety of perks and benefits to support you and your family and to help you engage with your local community. Our offerings include health coverage, paid volunteer days, wellness resources, and so much more. Visit go.atlassian.com/perksandbenefits to learn more.

About Atlassian: At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet, and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support you with accommodations or adjustments at any stage of the recruitment process; simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh.
Posted 2 weeks ago
2.0 - 3.0 years
5 - 9 Lacs
Kochi, Coimbatore, Thiruvananthapuram
Work from Office
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, PySpark/PyTorch
Good-to-have skills: Redshift

Job Summary: You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities:
- Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals
- Solving complex data problems to deliver insights that help our business achieve its goals
- Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format
- Creating data products for analytics team members to improve productivity
- Calling AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline (see the sketch after this posting)
- Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions
- Preparing data to create a unified database and building tracking solutions to ensure data quality
- Creating production-grade analytical assets deployed using the guiding principles of CI/CD

Professional and Technical Skills:
- Expert in at least two of: Python, Scala, PySpark, PyTorch, JavaScript
- Extensive experience in data analysis in big data Apache Spark environments, data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience with these technologies
- Experience in one of the many BI tools such as Tableau, Power BI, or Looker
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs
- Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse

Additional Information:
- Experience working in cloud data warehouses like Redshift or Synapse
- Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; Snowflake SnowPro Core - Data Engineer; Databricks Data Engineering

Qualification / Experience: 3.5-5 years of experience is required
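The "calling AI services" bullet might look like the following minimal sketch. The Google Cloud Translation client is used here purely as an illustrative choice (the posting also names Azure and AWS stacks), and project credentials are assumed to be configured:

```python
from google.cloud import translate_v2 as translate  # pip install google-cloud-translate

# Assumes GOOGLE_APPLICATION_CREDENTIALS is set for this environment
client = translate.Client()

reviews = [
    "Producto excelente, llegó a tiempo.",
    "La qualité est correcte mais la livraison était lente.",
]

# Translate each record to English before it flows into the next pipeline step
for text in reviews:
    result = client.translate(text, target_language="en")
    print(result["translatedText"], "| detected:", result["detectedSourceLanguage"])
```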
Posted 2 weeks ago
11.0 - 15.0 years
45 - 50 Lacs
Bengaluru
Work from Office
Management Level: Ind & Func AI Decision Science Senior Manager
Location: Gurgaon, Bangalore
Must-have skills: Risk Analytics; Model Development, Validation, and Auditing; Performance Evaluation, Monitoring, Governance; Statistical Techniques: Linear Regression, Logistic Regression, GLM, GBM, XGBoost, Time Series (ARMA/ARIMA); Programming Languages: SAS, R, Python, Spark, Scala; Tools: Tableau, Power BI; Regulatory Knowledge: Basel/CCAR/DFAST/CECL/IFRS9; Risk Reporting and Dashboard Solutions
Good-to-have skills: Advanced Data Science Techniques, AML, Operational Risk Modelling, Cloud Platform Experience (AWS/Azure/GCP), Machine Learning Interpretability and Bias Algorithms

Job Summary: We are seeking a highly skilled Ind & Func AI Decision Science Consultant to join the Accenture Strategy & Consulting team in the Global Network Data & AI practice. You will be responsible for risk model development, validation, and auditing activities, ensuring performance evaluation, monitoring, governance, and documentation. This role offers opportunities to work with top financial clients globally, utilizing cutting-edge technologies to drive business capabilities and foster innovation.

Roles & Responsibilities:

Engagement Execution:
- Lead client engagements that may involve model development, validation, governance, risk strategy, transformation, implementation, and end-to-end delivery of risk management solutions for Accenture's clients.
- Advise clients on a wide range of Credit, Market, and Operational Risk and Management/Analytics initiatives. Projects may involve risk management advisory work for CROs, CFOs, etc., to achieve a variety of business, operational, and regulatory outcomes.
- Be a trusted advisor to senior executives and management on their business needs and issues.
- Develop and frame Proofs of Concept for key clients, where applicable, including scoping, staffing, engagement setup, and execution.

Practice Enablement:
- Mentor, groom, and counsel analysts, consultants, and managers to be successful and effective management consultants.
- Support development of the Risk Analytics practice by driving initiatives around staffing, quality management, recruitment, capability development, knowledge management, etc.
- Develop thought capital and disseminate information on current and emerging trends in financial risk management. Contribute to the development of Accenture points of view on a variety of risk analytics topics. Publish research and present ideas at industry conferences and seminars.

Opportunity Development:
- Identify business development opportunities for our risk management offerings in the Banking and Capital Markets domains.
- Develop compelling business cases/responses to new business opportunities.
- Work with deal teams to provide subject matter expertise on credit, market, and operational risk topics, and participate in the development of client proposals and RFP responses.

Client Relationship Development:
- Develop trusted relationships with internal and external clients, with an eye for qualifying potential opportunities and negotiating complex deals.
- Build strong relationships with global Accenture Analytics and Risk Management teams, and further develop existing relationships based on mutual benefit and synergies.
Professional & Technical Skills:
- 11-15 years of relevant risk analytics experience at one or more financial services firms, or in professional services/risk advisory, with significant exposure to:
  - Credit Risk: risk ratings, credit risk methodology, PD/LGD/EAD models, CCAR/DFAST loss forecasting, IFRS9/CECL loss forecasting across retail and commercial portfolios.
  - Market Risk: stress testing, liquidity risk, counterparty credit risk, PPNR/revenue/loss forecasting, pricing.
  - Operational Risk: fraud risk, collections and recovery, credit policy and limit management, counterparty credit risk.
- Regulatory knowledge: Basel II/III, Solvency, FRTB, CCAR, IFRS9/CECL, etc.
- Strong understanding of banking products across retail and wholesale asset classes, and expertise in the frameworks and methodologies used in risk analytics for banking portfolios.
- Expertise in risk strategy design and supporting analytics for banking portfolios.
- Modeling techniques: Linear Regression, Logistic Regression, GLM, GBM, XGBoost, CatBoost, Neural Networks, Time Series (ARMA/ARIMA), ML interpretability and bias algorithms.
- Programming languages and tools: SAS, R, Python, Spark, Scala, Tableau, QlikView, Power BI, SAS VA, Moody's RiskCalc, Bloomberg, Murex, QRM.

Additional Information:
- Master's degree in a quantitative discipline (mathematics, statistics, economics, financial engineering, operations research) or an MBA from a top-tier university.
- Industry certifications: FRM, PRM, CFA preferred.
- Excellent communication and interpersonal skills.
- Willingness to travel up to 40% of the time.

Qualification / Experience: Minimum 11-15 years of relevant risk analytics experience, with exposure to financial services firms or professional services/risk advisory. Educational qualification: Master's degree in a quantitative discipline or an MBA from a top-tier university; industry certifications such as FRM, PRM, CFA preferred.
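As a hedged, toy-scale sketch of the PD (probability of default) modeling this role references, here is a scikit-learn logistic regression; the features and labels are synthetic stand-ins, not a real methodology:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic borrower features (e.g., utilization, delinquency count, income proxy)
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 3))
# Synthetic default flag loosely driven by the first two features
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=5000) > 1.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pd_model = LogisticRegression()
pd_model.fit(X_train, y_train)

# Gini = 2*AUC - 1 is the usual discrimination metric in credit scoring
auc = roc_auc_score(y_test, pd_model.predict_proba(X_test)[:, 1])
print(f"AUC: {auc:.3f}, Gini: {2 * auc - 1:.3f}")
```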
Posted 2 weeks ago
2.0 - 7.0 years
0 - 1 Lacs
Pune, Chennai, Bengaluru
Hybrid
Hello Connections, exciting opportunity alert!! We're on the hunt for passionate individuals to join our dynamic team as Data Engineers.

Job Profile: Data Engineer
Experience: Minimum 5 to maximum 8 years of experience
Location: Chennai / Bangalore / Pune / Mumbai / Hyderabad
Mandatory Skills: Big Data | Hadoop | Scala | Spark | Spark SQL | Hive
Qualification: B.Tech / B.E / MCA, computer science background - any specialization

How to apply? Send your CV to sipriyar@sightspectrum.in
Contact number: 6383476138

Don't miss out on this amazing opportunity to accelerate your professional career! #bigdata #dataengineer #hadoop #spark #python #hive #pyspark
Posted 2 weeks ago
4.0 - 6.0 years
6 - 10 Lacs
Gurugram
Work from Office
Role Description: As a Senior Cloud Data Platform (AWS) Specialist at Incedo, you will be responsible for designing, deploying, and maintaining cloud-based data platforms on AWS. You will work with data engineers, data scientists, and business analysts to understand business requirements and design scalable, reliable, and cost-effective solutions that meet those requirements.

Roles & Responsibilities:
- Designing, developing, and deploying cloud-based data platforms using Amazon Web Services (AWS)
- Integrating and processing large amounts of structured and unstructured data from various sources
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Collaborating with other teams to ensure the consistency and integrity of data
- Troubleshooting and resolving data platform issues

Technical Skills Requirements:
- In-depth knowledge of AWS services and tools such as AWS Glue, AWS Redshift, and AWS Lambda
- Experience in building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies
- Familiarity with cloud-based infrastructure and deployment, specifically on AWS
- Strong knowledge of programming languages such as Python, Java, and SQL
- Excellent communication skills, with the ability to convey complex technical information to non-technical stakeholders in a clear and concise manner
- Must understand the company's long-term vision and align with it
- Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team

Qualifications:
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
Posted 2 weeks ago
3.0 - 5.0 years
15 - 27 Lacs
Bengaluru
Work from Office
Job Summary: From the newest ideas in cluster computing to the latest web framework, NetApp software products embrace innovation to deliver compelling solutions to our business. Be part of a team where your ideas can make a difference and where you'll be part of a collaborative, open-minded culture. NetApp is looking for an Engineer to join our BlueXP software and application development team. BlueXP is our unified console and API namespace that offers a seamless experience across all our storage and data solutions. It is a unified control plane that provides global visibility and operational simplicity of storage and data services across on-premises and cloud environments. This is a great opportunity to work on a high-powered team delivering an industry-changing product within an extremely high-growth sector of the tech industry.

Job Requirements:
- Programming skills in NodeJS, Java/Scala/Go with an understanding of OOP, as well as scripting languages (Python/shell script).
- Experience with REST APIs.
- Experience working on the Linux platform.
- Understanding of concepts related to data structures and operating system fundamentals.
- Strong aptitude for learning new technologies.
- Strong verbal and written communication skills, including presentation skills to engage any audience.
- Creative and analytical approach to problem solving.
- Programming skills with multi-threading, complex algorithms, and problem solving.
- Familiarity with Docker, Kubernetes, and cloud technologies.

Essential Functions: A major part of your responsibility will be to use up-to-date technologies to complete projects as part of the development cycle, including coding, design, development, debugging, and testing. Participate in technical discussions within the team or other groups to evaluate and execute design and development plans for the product.

Education: We are seeking candidates pursuing a master's degree in Computer Science, Computer Engineering, Electrical/Electronic Engineering, Information Systems, or an equivalent degree, with 2-5 years of experience preferred.
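A small, hedged sketch of the multi-threaded REST-client work the requirements gesture at, written in Python for consistency with the other examples here (the role itself lists NodeJS/Java/Scala/Go); the endpoint and volume IDs are hypothetical:

```python
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical storage-service endpoints to poll concurrently
VOLUME_IDS = ["vol-001", "vol-002", "vol-003"]
BASE_URL = "https://api.example.com/v1/volumes"

def fetch_volume(volume_id: str) -> dict:
    resp = requests.get(f"{BASE_URL}/{volume_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()

# Multi-threaded fan-out: I/O-bound REST calls benefit from a thread pool
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(fetch_volume, vid): vid for vid in VOLUME_IDS}
    for future in as_completed(futures):
        vid = futures[future]
        try:
            print(vid, future.result().get("state"))
        except requests.RequestException as exc:
            print(f"{vid} failed: {exc}")
```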
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to ensure the applications function as intended, while continuously seeking ways to enhance application efficiency and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Engage in code reviews to ensure quality and adherence to best practices.

Professional & Technical Skills:
- Must-have: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of data integration and ETL processes.
- Experience with cloud-based data solutions and analytics.
- Familiarity with programming languages such as Python or Scala.
- Ability to work with data visualization tools to present insights.

Additional Information:
- The candidate should have a minimum of 3 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
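For flavor, a minimal, hedged PySpark-on-Databricks ETL sketch ending in a Delta Lake write; the mount path and table names are hypothetical:

```python
from pyspark.sql import functions as F

# On Databricks, `spark` is provided by the runtime; paths/tables here are invented
orders = (spark.read
          .format("json")
          .load("/mnt/raw/orders/"))

curated = (orders
           .filter(F.col("amount") > 0)
           .withColumn("ingest_date", F.current_date())
           .dropDuplicates(["order_id"]))

# Write as a Delta table, the platform's standard storage format
(curated.write
 .format("delta")
 .mode("append")
 .saveAsTable("analytics.orders_curated"))
```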
Posted 2 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Our technology services client is seeking multiple GCP Big Data Analytics engineers to join their team on a contract basis. These positions offer strong potential for conversion to full-time employment upon completion of the initial contract period. Further details about the role:

Role: GCP Big Data Analytics
Experience: 8+ years
Location: Hyderabad
Notice Period: Immediate to 15 days
Mandatory Skills: GCP Storage, GCP BigQuery, GCP DataProc, GCP Cloud Composer, GCP DMS, Apache Airflow, Java, Python, Scala, GCP Datastream

Job Description: We are seeking a highly skilled and motivated Data Engineer with 5 to 7 years of experience to join our team. The ideal candidate will have hands-on expertise in Google Cloud Platform (GCP), particularly BigQuery, along with strong proficiency in PL/SQL and Unix shell scripting. You will be responsible for designing, building, and maintaining scalable data pipelines and solutions that support business intelligence and analytics initiatives.

Key Responsibilities:
- Design and implement robust data pipelines using GCP BigQuery.
- Develop and optimize complex PL/SQL queries for data extraction, transformation, and loading (ETL).
- Automate data workflows and system tasks using Unix shell scripting.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure data quality, integrity, and security across all data processes.
- Monitor and troubleshoot data pipeline performance and reliability.
- Document technical specifications, processes, and best practices.

Mandatory Skills:
- GCP BigQuery: advanced experience in building and optimizing queries, managing datasets, and integrating with other GCP services.
- PL/SQL: strong command of writing stored procedures and functions, and performance tuning.
- Unix shell scripting: proficiency in automating tasks and managing system-level operations.

If you are interested, share your updated resume with ravi.k@s3staff.com
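The BigQuery work might look like this minimal, hedged sketch using the official Python client; the project, dataset, and table names are assumptions, and application-default credentials are presumed configured:

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT customer_id, SUM(amount) AS total_spend
    FROM `example-project.sales.transactions`
    WHERE tx_date >= @start_date
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
"""
# Parameterized queries avoid string interpolation and help BigQuery cache plans
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("start_date", "DATE", "2024-01-01")]
)

for row in client.query(query, job_config=job_config).result():
    print(row.customer_id, row.total_spend)
```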
Posted 2 weeks ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary: We are looking for a skilled and proactive Machine Learning Applications Engineer with at least 6 years of industry experience and a strong foundation in DevOps practices. The ideal candidate will be responsible for building, deploying, and optimizing ML models in production environments, as well as ensuring robust infrastructure support for AI/ML pipelines. This role sits at the intersection of data science, machine learning engineering, and DevOps.

Key Responsibilities:
- Design, develop, and deploy scalable ML models and applications into production environments.
- Build and manage end-to-end ML pipelines, including data ingestion, model training, evaluation, versioning, deployment, and monitoring.
- Implement CI/CD pipelines tailored for ML workflows.
- Collaborate with data scientists, software engineers, and cloud architects to operationalize machine learning solutions.
- Ensure high availability, reliability, and performance of ML services in production.
- Monitor and optimize model performance post-deployment.
- Automate infrastructure provisioning using Infrastructure-as-Code (IaC) tools.
- Maintain strong documentation of ML systems, experiments, and deployment configurations.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Minimum 6 years of professional experience in software engineering or ML engineering roles.
- Strong hands-on experience with machine learning frameworks like TensorFlow, PyTorch, or scikit-learn.
- Proficiency in Python (and optionally Java, Scala, or Go).
- Solid experience with DevOps tools such as Docker, Kubernetes, Jenkins, and GitLab CI/CD.
- Experience with cloud platforms like AWS, Azure, or GCP, particularly with AI/ML services and infrastructure.
- Knowledge of monitoring and logging tools (e.g., Prometheus, Grafana, ELK, CloudWatch).
- Strong understanding of MLOps practices, including model versioning, experiment tracking, and reproducibility.

Preferred Qualifications:
- Experience with Kubeflow, MLflow, SageMaker, or Vertex AI.
- Familiarity with data engineering tools such as Apache Airflow, Spark, or Kafka.
- Understanding of data security and compliance best practices in ML deployments.
- Prior experience deploying large-scale, low-latency ML applications in production.
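The experiment-tracking requirement might look like this minimal MLflow sketch, hedged: a local tracking run with a scikit-learn model, with no specific tracking server or registry assumed:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    # Log params, metrics, and the model artifact for versioning/reproducibility
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```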
Posted 2 weeks ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities: As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results

Preferred Education: Master's degree

Required Technical and Professional Expertise:
- Experience in big data technologies like Hadoop, Apache Spark, and Hive.
- Practical experience in core Java (1.8 preferred), Python, or Scala.
- Experience with AWS cloud services, including S3, Redshift, EMR, etc.
- Strong expertise in RDBMS and SQL.
- Good experience in Linux and shell scripting.
- Experience building data pipelines with Apache Airflow.

Preferred Technical and Professional Experience:
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
- Ability to communicate results to technical and non-technical audiences.
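A hedged sketch of the enterprise-search bullet, using the official Elasticsearch Python client; the index name, document shape, and local endpoint are assumptions for illustration:

```python
from elasticsearch import Elasticsearch  # pip install elasticsearch

# Assumes a locally reachable cluster; real deployments would be secured/configured
es = Elasticsearch("http://localhost:9200")

# Index a hypothetical log document, then run a full-text search over it
es.index(index="app-logs", document={
    "service": "payments",
    "level": "ERROR",
    "message": "timeout connecting to ledger service",
})
es.indices.refresh(index="app-logs")  # make the new doc visible to search

hits = es.search(index="app-logs", query={"match": {"message": "timeout"}})
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["service"], "-", hit["_source"]["message"])
```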
Posted 2 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
HighRadius is looking for a Cloud & Data Engineer to own our end-to-end data platform: building fault-tolerant ETL pipelines, administering and tuning our OLTP/OLAP databases, and automating all infrastructure as code. You'll ensure data flows smoothly from ingestion through transformation to runtime analytics.

Responsibilities:
- Design, implement, and operate ETL workflows (Airflow, AWS Glue)
- Administer PostgreSQL, MySQL, or SQL Server: performance tuning, backups, restores
- Manage schema migrations via Flyway/Liquibase and version control
- Provision and maintain data infrastructure with Terraform & Ansible
- Monitor job health & database metrics; troubleshoot failures and slow queries
- Automate scaling, snapshots, and cost controls for RDS clusters
- Secure the data environment with IAM, network policies, and encryption
- Participate in 24×7 on-call rotations; author runbooks and postmortems
- Collaborate on data modeling, indexing strategies, and query optimization
- Document all data flows, tuning guides, and runbooks in Confluence

Requirements:
- B.Tech/BE in CS, Information Systems, or equivalent
- 4+ years building and operating ETL pipelines
- 3+ years as a DBA on PostgreSQL/MySQL or SQL Server
- Hands-on with Airflow, Informatica, or AWS Glue; strong Python/Java/Scala skills
- Proven ability to profile and tune complex SQL queries
- Terraform/Ansible experience for infrastructure automation
- Familiarity with monitoring tools (Prometheus, Grafana, CloudWatch)
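The Airflow half of the responsibilities could be skeletonized as below, a hedged sketch in Airflow 2.x style; the DAG ID, schedule, and task bodies are placeholders:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from the hypothetical source system")

def load():
    print("write transformed rows to the warehouse")

# Daily ETL skeleton; `schedule=` is the Airflow 2.4+ spelling
with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # load runs only after extract succeeds
```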
Posted 2 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Data Analyst
Location: Bengaluru, Karnataka, India

About the Role: We're looking for an experienced Data Analyst to join our team, focusing on building and enhancing underwriting products specifically for the Indian market. In this role, you'll be instrumental in developing sophisticated credit assessment frameworks and scoring models by leveraging diverse data sources. If you're passionate about data, have a deep understanding of the Indian financial services landscape, and thrive in a dynamic environment, we encourage you to apply.

What You'll Do: Be the powerhouse behind scalable and efficient solutions that span a broad spectrum of fintech sectors, be it lending, insurance, or investments. Our work isn't confined to a single domain. We tackle a diverse set of problem statements, from computer vision and tabular data to natural language processing, speech recognition, and even generative AI. Each day brings a new challenge and a new opportunity for breakthroughs.

What You'll Bring:
- Bachelor's or Master's in Engineering or equivalent.
- 2+ years of Data Science/Machine Learning experience.
- Strong knowledge of statistics, tree-based techniques (e.g., Random Forests, XGBoost), machine learning (e.g., MLP, SVM), inference, hypothesis testing, simulations, and optimization.
- Bonus: experience with deep learning techniques; experience working in the ads domain or with reinforcement learning.
- Strong Python programming skills and experience building data pipelines in PySpark, along with feature engineering.
- Proficiency in pandas, scikit-learn, Scala, and SQL, and familiarity with TensorFlow/PyTorch.
- Understanding of DevOps/MLOps, including creating Docker containers and deploying to production (using platforms like Databricks or Kubernetes).
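In the spirit of the tree-based underwriting models named above, a toy, hedged XGBoost scoring sketch; the applicant features and default labels are synthetic:

```python
import numpy as np
from xgboost import XGBClassifier  # pip install xgboost
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic applicant features (e.g., income, utilization, tenure, bureau proxy)
rng = np.random.default_rng(7)
X = rng.normal(size=(10_000, 4))
y = (X[:, 1] - 0.5 * X[:, 3] + rng.normal(scale=1.5, size=10_000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)

scorer = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                       eval_metric="auc")
scorer.fit(X_tr, y_tr)

probs = scorer.predict_proba(X_te)[:, 1]
print("holdout AUC:", round(roc_auc_score(y_te, probs), 3))
```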
Posted 2 weeks ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Basic qualifications:
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)

The Amazon IN Platform Development team is looking to hire a rock star Data/BI Engineer to build for pan-Amazon India businesses. Amazon India is at the core of hustle @ Amazon WW today, and the team is chartered with democratizing data access for the entire marketplace and adding productivity. That translates to owning the processing of every Amazon India transaction, for which the team is organized to have dedicated business owners and processes for each focus area. The BI Engineer will play a key role in contributing to the success of each focus area by partnering with respective business owners and leveraging data to identify areas of improvement and optimization. He/she will build deliverables like business process automation, payment behavior analysis, campaign analysis, fingertip metrics, failure prediction, etc. that provide an edge to business decision making AND can scale with growth. The role sits in the sweet spot between the technology and business worlds AND provides opportunity for growth, high business impact, and working with seasoned business leaders.

An ideal candidate will be someone with a sound technical background in the data domain (storage/processing/analytics), solid business acumen, and a strong automation/solution-oriented thought process. Will be a self-starter who can start with a business problem and work backwards to conceive and devise the best possible solution. Is a great communicator and at ease partnering with business owners and other internal/external teams. Can explore newer technology options, if need be, and has a high sense of ownership over every deliverable by the team. Is constantly obsessed with customer delight and business impact/end result and 'gets it done' in business time.

Key job responsibilities:
- Design, implement, and support data infrastructure for the analytics needs of a large organization
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies
- Be enthusiastic about building deep domain knowledge of Amazon's business
- Possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment
- Enjoy working closely with your peers in a group of very smart and talented engineers
- Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Explore and learn the latest AWS technologies to provide new capabilities and increase efficiency

About the team: The India Data Engineering and Analytics (IDEA) team is the central data engineering team for Amazon India. Our vision is to simplify and accelerate data-driven decision making for Amazon India by providing cost-effective, easy, and timely access to high-quality data. We achieve this by building UDAI (Unified Data & Analytics Infrastructure for Amazon India), which serves as a central data platform and provides data engineering infrastructure, ready-to-use datasets, and self-service reporting capabilities.
Our core responsibilities towards the India marketplace include: a) providing systems (infrastructure) and workflows that allow ingestion, storage, processing, and querying of data; b) building ready-to-use datasets for easy and faster access to the data; c) automating standard business analysis/reporting/dashboarding; d) empowering the business with self-service tools for deep dives and insight seeking.

Preferred qualifications:
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Knowledge of AWS infrastructure
- Knowledge of the basics of designing and implementing a data schema, such as normalization and the relational model vs. the dimensional model

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
HEROIC Cybersecurity (HEROIC.com) is seeking a Senior Data Infrastructure Engineer with deep expertise in DataStax Enterprise (DSE) and Apache Cassandra to help architect, scale, and maintain the data infrastructure that powers our cybersecurity intelligence platforms. You will be responsible for designing and managing fully automated big data pipelines that ingest, process, and serve hundreds of billions of breached and leaked records sourced from the surface, deep, and dark web. You'll work with DSE Cassandra, Solr, and Spark, helping us move toward a 99% automated pipeline for data ingestion, enrichment, deduplication, and indexing, all built for scale, speed, and reliability. This position is critical in ensuring our systems are fast, reliable, and resilient as we ingest thousands of unique datasets daily from global threat intelligence sources.

What you will do:
- Design, deploy, and maintain high-performance Cassandra clusters using DataStax Enterprise (DSE)
- Architect and optimize automated data pipelines to ingest, clean, enrich, and store billions of records daily
- Configure and manage DSE Solr and Spark to support search and distributed processing at scale
- Automate dataset ingestion workflows from unstructured surface, deep, and dark web sources
- Handle cluster management, replication strategy, capacity planning, and performance tuning
- Ensure data integrity, availability, and security across all distributed systems
- Write and manage ETL processes, scripts, and APIs to support data flow automation
- Monitor systems for bottlenecks, optimize queries and indexes, and resolve production issues
- Research and integrate third-party data tools or AI-based enhancements (e.g., smart data parsing, deduplication, ML-based classification)
- Collaborate with engineering, data science, and product teams to support HEROIC's AI-powered cybersecurity platform

Requirements:
- Minimum 5 years of experience with Cassandra / DataStax Enterprise in production environments
- Hands-on experience with DSE Cassandra, Solr, Apache Spark, CQL, and data modeling at scale
- Strong understanding of NoSQL architecture, sharding, replication, and high availability
- Advanced knowledge of Linux/Unix, shell scripting, and automation tools (e.g., Ansible, Terraform)
- Proficiency in at least one programming language: Python, Java, or Scala
- Experience building large-scale automated data ingestion systems or ETL workflows
- Solid grasp of AI-enhanced data processing, including smart cleaning, deduplication, and classification
- Excellent written and spoken English communication skills
- Prior experience with cybersecurity or dark web data (preferred but not required)

Benefits:
- Position Type: Full-time
- Location: Pune, India (Remote - work from anywhere)
- Compensation: Competitive salary based on experience
- Benefits: Paid time off + public holidays
- Professional Growth: Amazing upward mobility in a rapidly expanding company
- Innovative Culture: Fast-paced, innovative, and mission-driven; be part of a team that leverages AI and cutting-edge technologies

About Us: HEROIC Cybersecurity (HEROIC.com) is building the future of cybersecurity. Unlike traditional cybersecurity solutions, HEROIC takes a predictive and proactive approach to intelligently secure our users before an attack or threat occurs. Our work environment is fast-paced, challenging, and exciting. At HEROIC, you'll work with a team of passionate, engaged individuals dedicated to intelligently securing the technology of people all over the world.
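A hedged sketch of the Cassandra data-modeling work described, using the open-source Python driver against a local node; the keyspace, table, and replication settings are illustrative only (a real DSE cluster would use secure contact points and a production replication strategy):

```python
from cassandra.cluster import Cluster  # pip install cassandra-driver

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# Hypothetical keyspace/table for breach records, keyed for per-email lookups
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS breach_data
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS breach_data.records (
        email text, source text, leaked_at timestamp, payload text,
        PRIMARY KEY (email, source)
    )
""")

# Prepared statements are the idiomatic path for high-volume ingestion
insert = session.prepare(
    "INSERT INTO breach_data.records (email, source, leaked_at, payload) "
    "VALUES (?, ?, toTimestamp(now()), ?)")
session.execute(insert, ("user@example.com", "example-dump", "{...}"))

rows = session.execute(
    "SELECT source FROM breach_data.records WHERE email = %s", ("user@example.com",))
for row in rows:
    print(row.source)
```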
Position Keywords: DataStax Enterprise (DSE), Apache Cassandra, Apache Spark, Apache Solr, AWS, Jira, NoSQL, CQL (Cassandra Query Language), Data Modeling, Data Replication, ETL Pipelines, Data Deduplication, Data Lake, Linux/Unix Administration, Bash, Docker, Kubernetes, CI/CD, Python, Java, Distributed Systems, Cluster Management, Performance Tuning, High Availability, Disaster Recovery, AI-based Automation, Artificial Intelligence, Big Data, Dark Web Data
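For a concrete sense of the idempotent write path this role owns, below is a minimal sketch in Python using the open-source DataStax driver (cassandra-driver). The keyspace, table, and column names are hypothetical illustrations, not HEROIC's actual schema, and a production cluster would use NetworkTopologyStrategy with a replication factor of 3 or more rather than the single-node settings shown here.

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster

def ingest_records(records, hosts=("127.0.0.1",)):
    # Connect to the cluster; a DSE cluster accepts the same driver contact points.
    cluster = Cluster(list(hosts))
    session = cluster.connect()
    # SimpleStrategy / RF=1 keeps the sketch runnable on one node (assumption,
    # for illustration only); production would use NetworkTopologyStrategy.
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS breach_data
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)
    # Hypothetical table: partitioning by email with source as a clustering key
    # makes re-ingesting the same record an idempotent overwrite (dedup by key).
    session.execute("""
        CREATE TABLE IF NOT EXISTS breach_data.records (
            email text, source text, leaked_at timestamp, payload text,
            PRIMARY KEY ((email), source)
        )
    """)
    insert = session.prepare(
        "INSERT INTO breach_data.records (email, source, leaked_at, payload) "
        "VALUES (?, ?, ?, ?)"
    )
    insert.consistency_level = ConsistencyLevel.LOCAL_QUORUM
    for rec in records:
        session.execute(insert, (rec["email"], rec["source"],
                                 rec["leaked_at"], rec["payload"]))
    cluster.shutdown()

Prepared statements and a primary key chosen for idempotent upserts are the standard Cassandra pattern for high-volume ingestion of the kind described above; real pipelines would batch by partition and tune consistency per workload.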
Posted 2 weeks ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About The Organisation
DataFlow Group is a pioneering global provider of specialized Primary Source Verification (PSV) solutions and background screening and immigration compliance services that assist public and private organizations in mitigating risks and making informed, cost-effective decisions regarding their Applicants and Registrants.
About The Role
We are looking for a highly skilled and experienced Senior ETL & Data Streaming Engineer with over 10 years of experience to play a pivotal role in designing, developing, and maintaining our robust data pipelines. The ideal candidate will have deep expertise in both batch ETL processes and real-time data streaming technologies, coupled with extensive hands-on experience with AWS data services. A proven track record of working with Data Lake architectures and traditional Data Warehousing environments is essential.
Duties And Responsibilities
Design, develop, and implement highly scalable, fault-tolerant, and performant ETL processes using industry-leading ETL tools to extract, transform, and load data from various source systems into our Data Lake and Data Warehouse.
Architect and build batch and real-time data streaming solutions using technologies such as Talend, Informatica, Apache Kafka, or AWS Kinesis to support immediate data ingestion and processing requirements.
Utilize and optimize a wide array of AWS data services.
Collaborate with data architects, data scientists, and business stakeholders to understand data requirements and translate them into efficient data pipeline solutions.
Ensure data quality, integrity, and security across all data pipelines and storage solutions.
Monitor, troubleshoot, and optimize existing data pipelines for performance, cost-efficiency, and reliability.
Develop and maintain comprehensive documentation for all ETL and streaming processes, data flows, and architectural designs.
Implement data governance policies and best practices within the Data Lake and Data Warehouse environments.
Mentor junior engineers and contribute to fostering a culture of technical excellence and continuous improvement.
Stay abreast of emerging technologies and industry best practices in data engineering, ETL, and streaming.
Qualifications
10+ years of progressive experience in data engineering, with a strong focus on ETL, ELT, and data pipeline development.
Deep expertise in ETL tools: Extensive hands-on experience with commercial ETL tools (Talend).
Strong proficiency in data streaming technologies: Proven experience with real-time data ingestion and processing using platforms such as AWS Glue, Apache Kafka, AWS Kinesis, or similar.
Extensive AWS data services experience: Proficiency with AWS S3 for data storage and management. Hands-on experience with AWS Glue for ETL orchestration and data cataloging. Familiarity with AWS Lake Formation for building secure data lakes. Experience with AWS EMR for big data processing is good to have.
Data Warehouse (DWH) knowledge: Strong background in traditional data warehousing concepts, dimensional modeling (Star Schema, Snowflake Schema), and DWH design principles.
Programming languages: Proficient in SQL and at least one scripting language (e.g., Python, Scala) for data manipulation and automation.
Database skills: Strong understanding of relational databases and NoSQL databases.
Version control: Experience with version control systems (e.g., Git).
Problem-solving: Excellent analytical and problem-solving skills with a keen eye for detail.
Communication: Strong verbal and written communication skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences. (ref:hirist.tech)
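As an illustration of the streaming-ingestion hop described above, here is a minimal Python sketch that consumes a Kafka topic and lands micro-batches in the raw zone of an S3 data lake. The topic, bucket, and key prefix are hypothetical placeholders, and kafka-python plus boto3 stand in for whichever Kafka client and AWS SDK the actual stack uses.

import json
import time

import boto3
from kafka import KafkaConsumer

def stream_to_s3(topic="events", bucket="example-data-lake",
                 batch_size=500, brokers=("localhost:9092",)):
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers=list(brokers),
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        enable_auto_commit=False,   # commit only after a successful S3 write
        auto_offset_reset="earliest",
    )
    s3 = boto3.client("s3")
    batch = []
    for message in consumer:
        batch.append(message.value)
        if len(batch) >= batch_size:
            # Land one newline-delimited JSON object per micro-batch.
            key = f"raw/{topic}/{int(time.time())}.json"
            s3.put_object(Bucket=bucket, Key=key,
                          Body="\n".join(json.dumps(r) for r in batch))
            consumer.commit()       # at-least-once delivery semantics
            batch = []

Committing offsets only after the S3 write succeeds gives at-least-once delivery, which is the usual trade-off for a raw landing zone; downstream Glue or warehouse loads would then deduplicate on a business key.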
Posted 2 weeks ago
2.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About NetApp
NetApp is the intelligent data infrastructure company, turning a world of disruption into opportunity for every customer. No matter the data type, workload, or environment, we help our customers identify and realize new business possibilities. And it all starts with our people.
If this sounds like something you want to be part of, NetApp is the place for you. You can help bring new ideas to life, approaching each challenge with fresh eyes. Of course, you won't be doing it alone. At NetApp, we're all about asking for help when we need it, collaborating with others, and partnering across the organization - and beyond.
Job Summary
From the newest ideas in cluster computing to the latest web framework, NetApp software products embrace innovation to deliver compelling solutions to our business. Be part of a team where your ideas can make a difference and where you'll be part of a collaborative, open-minded culture.
NetApp is looking for an Engineer to join our BlueXP software and application development team. BlueXP is our unified console and API namespace that offers a seamless experience across all our storage and data solutions. It is a unified control plane that provides global visibility and operational simplicity of storage and data services across on-premises and cloud environments. This is a great opportunity to work on a high-powered team delivering an industry-changing product within an extremely high-growth sector of the tech industry.
Job Requirements
Programming skills in NodeJS, Java/Scala/Go with an understanding of OOP, as well as scripting languages (Python/shell script).
Experience with REST APIs.
Experience working on the Linux platform.
Understanding of concepts related to data structures and operating system fundamentals.
Strong aptitude for learning new technologies.
Strong verbal and written communication skills, including presentation skills to engage any audience.
Creative and analytical approach to problem solving.
Programming skills with multi-threading, complex algorithms, and problem solving.
Familiarity with Docker, Kubernetes, and cloud technologies.
Essential Functions
A major part of your responsibility will be to use up-to-date technologies to complete projects as part of the development cycle, including coding, design, development, debugging, and testing. Participate in technical discussions within the team or other groups to evaluate and execute design and development plans for the product.
Education
We are seeking candidates pursuing a master's degree in Computer Science, Computer Engineering, Electrical/Electronic Engineering, Information Systems, or an equivalent degree, with 2-5 years of experience preferred.
At NetApp, we embrace a hybrid working environment designed to strengthen connection, collaboration, and culture for all employees. This means that most roles will have some level of in-office and/or in-person expectations, which will be shared during the recruitment process.
Equal Opportunity Employer
NetApp is firmly committed to Equal Employment Opportunity (EEO) and to compliance with all laws that prohibit employment discrimination based on age, race, color, gender, sexual orientation, gender identity, national origin, religion, disability or genetic information, pregnancy, and any protected classification.
Why NetApp?
We are all about helping customers turn challenges into business opportunity.
It starts with bringing new thinking to age-old problems, like how to use data most effectively to run better - but also to innovate. We tailor our approach to the customer's unique needs with a combination of fresh thinking and proven approaches.
We enable a healthy work-life balance. Our volunteer time off program is best in class, offering employees 40 hours of paid time off each year to volunteer with their favourite organizations. We provide comprehensive benefits, including health care, life and accident plans, emotional support resources for you and your family, legal services, and financial savings programs to help you plan for your future. We support professional and personal growth through educational assistance and provide access to various discounts and perks to enhance your overall quality of life.
If you want to help us build knowledge and solve big problems, let's talk.
Submitting an application
To ensure a streamlined and fair hiring process for all candidates, our team only reviews applications submitted through our company website. This practice allows us to track, assess, and respond to applicants efficiently. Emailing our employees, recruiters, or Human Resources personnel directly will not influence your application.
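Since the role leans on REST API consumption and Python scripting, here is a minimal, hedged sketch of paging through a REST collection with the requests library. The endpoint, token, and response shape are hypothetical placeholders, not NetApp BlueXP's actual API.

import requests

def list_resources(base_url="https://api.example.com",
                   token="REPLACE_ME", page_size=100):
    """Page through a hypothetical REST collection endpoint."""
    headers = {"Authorization": f"Bearer {token}"}
    resources, offset = [], 0
    while True:
        resp = requests.get(f"{base_url}/v1/resources",
                            headers=headers,
                            params={"limit": page_size, "offset": offset},
                            timeout=30)
        resp.raise_for_status()    # surface HTTP errors early
        page = resp.json().get("items", [])
        resources.extend(page)
        if len(page) < page_size:  # a short page means we have reached the end
            return resources
        offset += page_size

Limit/offset pagination is only one convention; a real control-plane API may use cursor tokens instead, in which case the loop would follow the cursor returned by each response.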
Posted 2 weeks ago