2.0 - 6.0 years
0 Lacs
kolkata, west bengal
On-site
Role Overview: You will be responsible for selecting features, building and optimizing classifiers using machine learning techniques, performing data mining with state-of-the-art methods, enhancing data collection procedures, processing, cleansing, and verifying data integrity, performing ad-hoc analysis, creating automated anomaly detection systems, leading project meetings, managing multiple development designs, providing customer training, and participating in internal projects. You must be organized and analytical, have excellent communication skills, and be proficient in query languages.

Key Responsibilities:
- Select features, build and optimize classifiers using machine learning techniques
- Perform data mining using state-of-the-art methods
- Enhance data collection procedures and ensure data integrity
- Process, cleanse, and verify data for analysis
- Conduct ad-hoc analysis and present results clearly
- Create automated anomaly detection systems
- Lead project meetings and manage multiple development designs
- Provide customer training and participate in internal projects
- Have hands-on experience with data models and data warehousing technologies
- Possess good communication skills and proficiency in query languages

Qualifications Required:
- Excellent understanding of machine learning techniques and algorithms
- Experience with common data science toolkits such as R, Weka, NumPy, MATLAB
- Experience in SAS, Oracle Advanced Analytics, or SPSS is an advantage
- Proficiency in query languages such as SQL, Hive, Pig
- Experience with NoSQL databases such as MongoDB, Cassandra, HBase is an advantage
- Good applied statistics skills and scripting/programming skills
- Bachelor's/Master's in Statistics/Economics or an MBA is desirable
- BE/BTech/MTech with 2-4 years of experience in data science

Additional Details: The company values skills such as communication, coaching, relationship building, client service passion, curiosity, teamwork, courage, integrity, technical expertise, openness to change, and adaptability.
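The "automated anomaly detection systems" responsibility above can be sketched with a simple z-score rule. This is a minimal, illustrative approach only; the readings and threshold are made-up values, not from the posting:

```python
from statistics import mean, stdev

def detect_anomalies(values, threshold=3.0):
    """Return indices of points whose z-score exceeds the threshold."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:  # constant series: nothing can be anomalous
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Hypothetical sensor readings with one obvious outlier.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 55.0, 10.2]
print(detect_anomalies(readings, threshold=2.0))  # → [5]
```

Real systems would use rolling windows or robust statistics, but the core idea (flag points far from the expected distribution) is the same.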
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
chennai, tamil nadu
On-site
You have a great opportunity to join us as a Software Engineer / Senior Software Engineer / System Analyst in Chennai with 4-7 years of experience. We are looking for a candidate with expertise in database testing, ETL testing, and Agile methodology. As part of our team, your responsibilities will include test planning and execution, working with Agile methodology, and a minimum of 4-6 years of testing experience. Experience in the auditing domain would be a plus.

You should have strong application analysis, troubleshooting, and behavioral skills, along with extensive experience in manual testing. Experience in automation scripting would be an added advantage. You will also lead discussions with business, development, and vendor teams for testing activities, such as acting as Defect Coordinator and reviewing test scenarios. Strong communication skills, both verbal and written, are essential for this role. You should be able to work effectively independently as well as with onshore and offshore teams.

In addition, we are seeking an experienced ETL developer with expertise in Big Data technologies like Hadoop. The required skills include Hadoop (Hortonworks), HDFS, Hive, Pig, Knox, Ambari, Ranger, Oozie, Talend, SSIS, MySQL, MS SQL Server, Oracle, Windows, and Linux. This role may require you to work 2nd shifts (1pm - 10pm), and excellent English communication skills are a must. If you are interested, please share your profile on mytestingcareer.com and mention your current CTC, expected CTC, notice period, current location, and contact number in your response. Don't miss this opportunity to be a part of our dynamic team!
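The database/ETL testing work described above often begins with source-to-target reconciliation. A minimal sketch using Python's built-in sqlite3 module; the table name, `amount` column, and sample rows are hypothetical:

```python
import sqlite3

def reconcile(src_conn, tgt_conn, table):
    """Compare row counts and a column checksum between source and target."""
    query = f"SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {table}"
    src = src_conn.execute(query).fetchone()
    tgt = tgt_conn.execute(query).fetchone()
    return {"rows_match": src[0] == tgt[0], "sum_match": src[1] == tgt[1]}

src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")
for conn in (src, tgt):
    conn.execute("CREATE TABLE txns (id INTEGER, amount REAL)")
src.executemany("INSERT INTO txns VALUES (?, ?)", [(1, 10.0), (2, 20.0)])
tgt.executemany("INSERT INTO txns VALUES (?, ?)", [(1, 10.0)])  # simulated load gap
print(reconcile(src, tgt, "txns"))  # both checks report a mismatch
```

In practice the same pattern runs against the real source and warehouse databases, with per-column checksums and tolerance rules.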
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
This is an opportunity for you to shape the direction of your career and lead multiple teams to success at one of the world's top financial institutions. As a Manager of Software Engineering at JPMorgan Chase on the Employee Platforms team, you will be responsible for overseeing multiple teams and managing daily implementation activities. Your duties will include identifying and escalating issues, ensuring compliance with standards, meeting business requirements, and adhering to tactical best practices.

You will be tasked with managing data integration and analysis of disparate systems, building extensible data acquisition and integration solutions, implementing processes to extract, transform, and distribute data across various data stores, and providing problem-solving expertise for complex data analysis. Additionally, you will collaborate with internal product development and cross-functional teams, work with remote teams, and demonstrate strong communication, presentation, interpersonal, and analytical skills.

To be successful in this role, you should have formal training or certification in software engineering concepts, at least 5 years of applied experience, and coaching and mentoring skills. Hands-on experience in data integration projects using Big Data technologies, familiarity with file-storage formats, expertise in Spark, Scala, Kafka, and CI/CD tools, as well as knowledge of open-source Java, APIs, Spring Boot applications, and data querying tools like Pig, Hive, and Impala are essential. A strong understanding of Data Warehousing and Data Lake concepts and experience with Oracle/SQL and NoSQL data stores are required. Preferred qualifications include experience in AWS data warehousing and database platforms, the ability to adapt to new technologies quickly, and the capability to initiate and implement innovative solutions to business challenges.
Posted 6 days ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
As a Data Engineer Trainer/Big Data Trainer, you will be responsible for imparting knowledge and training on various technical aspects of data engineering and big data. Your key responsibilities will require expertise in data mining and ETL operations/tools. It is crucial to have a deep understanding of HDFS, the Hadoop system, MapReduce, RDDs, Spark DataFrames, and PySpark, along with related concepts. You should also have experience using Business Intelligence tools such as Tableau and Power BI, and Big Data frameworks like Hadoop and Spark. Proficiency in Pig, Hive, Sqoop, and Kafka is essential for this role. Knowledge of AWS and/or Azure, especially their Big Data stacks, will be an added advantage. You should possess a high level of proficiency in standard database skills like SQL and NoSQL databases, as well as data preparation, cleaning, and wrangling/munging. A strong foundation and advanced understanding of statistics, R programming, Python, and machine learning are necessary to excel in this role.
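The MapReduce concept this trainer role covers can be illustrated in plain Python. This toy word count mimics the map and reduce phases of the model; it is not an actual Hadoop job:

```python
from collections import Counter
from itertools import chain

def map_phase(line):
    # Mapper: emit (word, 1) pairs for each word in the input line.
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    # Reducer: sum counts per key, as happens after the shuffle step.
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big plans", "data pipelines"]
mapped = chain.from_iterable(map_phase(line) for line in lines)
print(reduce_phase(mapped))  # → {'big': 2, 'data': 2, 'plans': 1, 'pipelines': 1}
```

In a real cluster the mapper and reducer run in parallel across nodes, with the framework handling the shuffle; the per-record logic stays this simple.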
Posted 6 days ago
8.0 - 13.0 years
20 - 25 Lacs
hyderabad
Work from Office
Our Data Engineers work with Data Scientists, Project Leads, and Managers on implementation, upgrade, and migration projects.

Key Responsibilities:
• Analyze raw data
• Develop and maintain datasets
• Improve data quality and efficiency
• Create solution and design documentation
• Work on projects independently as well as part of a large team
• Develop internal training, processes, and best practices
• Cross-train junior Data Engineers or other team members in your area of expertise
• Further develop skills both on the job and through formal learning channels
• Assist in pre-sales activities by providing accurate work estimates
• Interact closely with Project Management to deliver projects on time and on budget

Technical Skills:
• Python (PyData, pandas, NumPy, PySpark)
• SQL (MS SQL, Oracle DB, Teradata)
• Azure Data Factory
• Azure Databricks
• Big Data (Spark, Pig, Hive, Sqoop, Kafka, etc.)
• DevOps (experience with tools such as GitHub Actions and Jenkins is preferred)
• Agile/Scrum
• REST services and API management:
  o Implementing API proxies through gateways using Apigee X and/or Apigee Edge
  o API design, development, and testing, including creating Swagger/OpenAPI specs

Education, Experience, and Certification:
• Post-secondary degree (or diploma) in Computer Science, MIS, or an IT-related field; a BA/BS in an unrelated field will also be considered depending on experience
• 5-8 years in Data Engineering
• 3+ years of application design and development experience in a cloud environment
• 2+ years of experience building and deploying containerized applications in a Kubernetes-enabled environment
• 2+ years of experience coding REST services and APIs using one or more of: Python, C#, Node.js, Java
• Certified Kubernetes Application Developer
• Google Cloud Certified Apigee API Engineer

Organizational Description: At TTEC Digital everything we do, every day, helps our clients fuel exceptional experiences for their customers. Together, we help our clients develop strategic customer experience design, integrate powerful data, and orchestrate industry-leading technology. We embrace the unique, positive, and healthy cultures that our different teams bring, and our leadership prioritizes the promise of work/life balance, continuous education, and high-performance incentives. We have been and will remain a remote-first employer, giving you the flexibility to take your career wherever life goes.
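A minimal extract-transform-load sketch of the kind of Python/SQL pipeline work listed above, using only the standard library. The CSV sample, table schema, and column names are invented for illustration:

```python
import csv
import io
import sqlite3

# Hypothetical raw extract: stray whitespace and a missing salary.
RAW = "id,name,salary\n1, Asha ,50000\n2,Ravi,\n3,Meera,72000\n"

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Trim whitespace, cast types, and drop rows missing a salary.
    return [
        {"id": int(r["id"]), "name": r["name"].strip(), "salary": int(r["salary"])}
        for r in rows if r["salary"]
    ]

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS emp (id INTEGER, name TEXT, salary INTEGER)")
    conn.executemany("INSERT INTO emp VALUES (:id, :name, :salary)", rows)
    return conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]

conn = sqlite3.connect(":memory:")
print(load(transform(extract(RAW)), conn))  # → 2 rows loaded after cleansing
```

Tools like Azure Data Factory or PySpark scale the same extract/transform/load shape to distributed sources and sinks.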
Posted 1 week ago
6.0 - 10.0 years
20 - 25 Lacs
pune
Work from Office
Experience working with technologies like Hadoop, Hive, Pig, Oozie, MapReduce, Spark, Sqoop, Kafka, Flume, etc. Extensive experience in software development, scripting, and project management. Experience using system monitoring tools (e.g. Grafana, Ganglia).
Posted 1 week ago
3.0 - 5.0 years
5 - 15 Lacs
pune, bengaluru
Work from Office
Design and develop data-ingestion frameworks, real-time processing solutions, and data processing and transformation frameworks leveraging open-source tools.
• Hands-on experience with technologies such as Kafka, Apache Spark (SQL, Scala, Java), Python, the Hadoop platform, Hive, and Airflow
• Experience in GCP Cloud Composer, BigQuery, and Dataproc
• Offer system support as part of a support rotation with other team members
• Operationalize open-source data-analytic tools for enterprise use
• Ensure data governance policies are followed by implementing or validating data lineage, quality checks, and data classification
• Understand and follow the company development lifecycle to develop, deploy, and deliver
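The data-governance bullet on quality checks can be implemented in many ways; one minimal sketch of completeness and uniqueness validation in plain Python (the column names and sample batch are hypothetical):

```python
def quality_checks(rows, required, unique_key):
    """Run simple completeness and uniqueness checks over a batch of records."""
    issues = []
    seen = set()
    for i, row in enumerate(rows):
        # Completeness: every required column must be populated.
        for col in required:
            if row.get(col) in (None, ""):
                issues.append((i, f"missing {col}"))
        # Uniqueness: the key column must not repeat within the batch.
        key = row.get(unique_key)
        if key in seen:
            issues.append((i, f"duplicate {unique_key}={key}"))
        seen.add(key)
    return issues

batch = [
    {"id": 1, "country": "IN"},
    {"id": 2, "country": ""},
    {"id": 1, "country": "US"},
]
print(quality_checks(batch, required=["country"], unique_key="id"))
```

Prints `[(1, 'missing country'), (2, 'duplicate id=1')]`. In a pipeline, a non-empty issue list would typically fail the run or route the bad rows to a quarantine table.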
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
As a Senior Software Engineer specializing in Big Data technologies, you will leverage your proven experience to drive impactful software solutions. Your hands-on expertise with a wide array of tools such as Hadoop, Hive, Pig, Oozie, MapReduce, Spark, Sqoop, Kafka, and Flume will be instrumental in developing cutting-edge software applications. Your role will involve extensive software development, scripting, and project management, ensuring the successful execution of projects.

You will demonstrate proficiency in system monitoring tools like Grafana and Ganglia, along with automated testing frameworks to streamline the development process. Your knowledge of programming languages such as Python, Java, Scala, and C++ will be essential for creating efficient solutions. Moreover, your familiarity with relational databases like PostgreSQL and MySQL and NoSQL databases like MongoDB will enable you to design robust software systems.

Operating across various platforms including Linux, macOS, and Windows, you will apply your analytical mindset and problem-solving skills to address complex challenges. Your ability to work independently and collaborate effectively with internal teams and vendors will be crucial in optimizing product quality. You will play a key role in developing high-quality software design, automating tasks, reviewing code, and conducting validation and verification testing. Furthermore, your role will involve documenting development phases, monitoring systems, and keeping software up to date with the latest technologies. Your excellent organizational and leadership skills will be pivotal in driving the success of projects.

This is a full-time position that requires in-person work at the designated location. If you are seeking a challenging opportunity to make a significant impact in the realm of Big Data technology, this role offers a dynamic environment where your skills and expertise will be valued and utilized to drive innovation and excellence in software development.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
chennai, tamil nadu
On-site
The Applications Development Intermediate Programmer Analyst position at our organization involves working at an intermediate level to assist in the development and implementation of new or updated application systems and programs in collaboration with the Technology team. Your main responsibility will be to contribute to application systems analysis and programming activities. You will be expected to utilize your knowledge of applications development procedures and concepts, as well as basic knowledge of other technical areas, to identify and define necessary system enhancements. This includes using script tools, analyzing code, and consulting with users, clients, and other technology groups to recommend programming solutions. Additionally, you will be involved in installing and supporting customer exposure systems and applying programming languages for design specifications. As an Applications Development Intermediate Programmer Analyst, you will also be responsible for analyzing applications to identify vulnerabilities and security issues, conducting testing and debugging, and serving as an advisor or coach to new or lower-level analysts. You should be able to identify problems, analyze information, and make evaluative judgments to recommend and implement solutions with a limited level of direct supervision. Furthermore, you will play a key role in resolving issues by selecting solutions based on your technical experience and guided by precedents. You will have the opportunity to exercise independence of judgment and autonomy, act as a subject matter expert to senior stakeholders and team members, and appropriately assess risk when making business decisions. 
To qualify for this role, you should have 4-8 years of relevant experience in the financial services industry, intermediate-level experience in an applications development role, clear and concise written and verbal communication skills, problem-solving and decision-making abilities, and the capacity to work under pressure and manage deadlines or unexpected changes in expectations or requirements. A Bachelor's degree or equivalent experience is required for this position.

In addition to the responsibilities outlined above, the ideal candidate should possess expertise in various technical areas, including strong Java programming skills, object-oriented programming, data structures, design patterns, Python web frameworks such as Flask and Django, Big Data technologies such as PySpark and Hadoop ecosystem components, and REST web services. Experience in Spark performance tuning, PL/SQL, SQL, Transact-SQL, data processing across different file types, UI frameworks, source code management tools like Git, Agile methodology, and issue trackers like Jira is highly desirable.

This job description offers a comprehensive overview of the role's responsibilities and qualifications. Please note that other job-related duties may be assigned as necessary. If you require a reasonable accommodation due to a disability to use our search tools or apply for a career opportunity, please review Accessibility at Citi. For additional information, you can view Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
1.0 - 7.0 years
0 Lacs
maharashtra
On-site
We have a job opening for a GenAI Team Lead at PibyThree Consulting Services Pvt Ltd in Mumbai. The company is a global cloud consulting and services provider specializing in Cloud Transformation, Cloud FinOps, IT Automation, Application Modernization, and Data & Analytics. PibyThree aims to make businesses successful by leveraging technology for automation and enhanced productivity. As a GenAI Lead at PibyThree, you will be responsible for building both traditional AI/ML systems and Generative AI-based systems. The ideal candidate should have 5 to 7 years of total experience, with hands-on experience building language models, machine learning, and AI models using industry tools, products, and Azure Cognitive Services.

Key Responsibilities:
- Highly skilled in programming, with the ability to assess, analyze, and organize large amounts of data
- Expertise in large language models (LLMs/LSTMs/BERT) and exposure to Anthropic, OpenAI, and others
- Experience building, customizing, and fine-tuning AI models through GenAI studios (OpenAI, Vertex AI, or Bedrock), extended via Azure, AWS, or GCP for rapid PoCs
- Proficiency in completions, embeddings, edits, transcriptions, translations, and moderations across various large language models
- Knowledge of LlamaIndex and LangChain
- Hands-on training and fine-tuning experience with OpenAI models such as Davinci, Curie, Babbage, and Ada
- Programming skills in R, Python, SQL, Hive, and Pig, with familiarity with Scala, Java, or C++
- Deep experience applying machine learning algorithms, with a strong data science and data engineering background
- Understanding of cloud technology and Cognitive Services, including Vision, Speech, Language, Decision, and Search
- Previous projects leveraging GenAI services such as Azure OpenAI, Bedrock, or Vertex AI
- Knowledge of text/content representation techniques, statistics, and classification algorithms

Good to have:
- Exposure to Dataiku
- Exposure to Snowflake and/or Databricks
- AWS/Azure/Google machine learning certifications

Requirements:
- Bachelor's degree preferred
- 5 years of total work experience, with specific experience in Artificial Intelligence (2 years), Generative AI (1 year), and Machine Learning (3 years)
- Generative AI certification preferred

This is a full-time position with an in-person work location and a day shift schedule. If you meet the qualifications and are interested in this role, please fill out the Google Form provided in the job description.
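Among the GenAI skills listed, embeddings are typically compared by cosine similarity to rank documents against a query. A small self-contained sketch; the three-dimensional vectors and document names are toy values, not real model embeddings:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

query = [0.9, 0.1, 0.0]
docs = {
    "pricing FAQ": [0.88, 0.15, 0.02],  # nearly parallel to the query
    "travel blog": [0.05, 0.20, 0.95],  # nearly orthogonal to it
}
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # → pricing FAQ
```

Real embedding vectors from an LLM provider have hundreds or thousands of dimensions, but retrieval ranks them with exactly this similarity score.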
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
noida, uttar pradesh
On-site
As a skilled professional with over 7 years of experience, you will be responsible for reviewing and understanding business requirements to ensure timely completion of development tasks, with rigorous testing to minimize defects. Collaborating with a software development team is crucial to implement best practices and enhance the performance of data applications, meeting client needs effectively.

In this role, you will collaborate with various teams within the company and engage with customers to comprehend, translate, define, and design innovative solutions for their business challenges. Your tasks will also involve researching new Big Data technologies to evaluate their maturity and alignment with business and technology strategies. Operating within a rapid and agile development process, you will focus on accelerating speed to market while upholding necessary controls.

Your qualifications should include a BE/B.Tech/MCA degree with a minimum of 6 years of IT experience, including 4 years of hands-on experience in design and development using the Hadoop technology stack and various programming languages. Furthermore, you are expected to have proficiency in multiple areas such as Hadoop, HDFS, MapReduce, Spark Streaming, Spark SQL, Spark ML, Kafka/Flume, Apache NiFi, Hortonworks Data Platform, Hive, Pig, Sqoop, NoSQL databases (HBase, Cassandra, Neo4j, MongoDB), visualization and reporting frameworks (D3.js, Zeppelin, Grafana, Kibana, Tableau, Pentaho), Scrapy for web crawling, Elasticsearch, Google Analytics data streaming, and data security protocols (Kerberos, OpenLDAP, Knox, Ranger). A strong knowledge of the current technology landscape and industry trends, plus experience in Big Data integration with Metadata Management, Data Quality, and Master Data Management solutions and structured/unstructured data, is essential. Your active participation in the community through articles, blogs, or speaking engagements at conferences will be highly valued in this role.
Posted 2 weeks ago
6.0 - 12.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
As an experienced Data Engineer with 6-12 years of experience, you will work with Big Data technologies, programming languages, ETL and data engineering, cloud platforms, and various tools and frameworks. Your expertise in Scala, Spark, PySpark, Python, SQL, Hadoop, Hive, Pig, MapReduce, and related skills will be crucial in this role. You must have a strong background in data warehouse design, ETL, data analytics, data mining, and data cleansing, with experience in creating data pipelines, optimization, troubleshooting, and data validation. Proficiency in cloud platforms such as GCP and Azure, along with tools like Apache Hadoop, Airflow, Kubernetes, and containers, will be essential for success in this position.

In addition to your technical skills, you should have solid work experience in data warehouse and Big Data technologies, with hands-on experience in Scala, Spark, PySpark, Python, and SQL. Experience in strategic data planning, governance, standard procedures, and working in Agile environments will also be beneficial. It would be advantageous if you have experience in data analytics, machine learning, and optimization, along with an understanding of Java, ReactJS, and Node.js, and experience managing big data workloads in containerized environments. Your ability to analyze large datasets and optimize data workflows will be highly valued in this role.

If you possess the required skills in Scala, Spark, Airflow, Big Data, and GCP and are looking to contribute your expertise in a dynamic and challenging environment, we encourage you to apply for this exciting opportunity.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
nagpur, maharashtra
On-site
As a Big Data/Hadoop Trainer, your main responsibility will be conducting training sessions on Big Data and Hadoop. You will be tasked with creating assignments and projects centered around Hadoop to enhance the learning experience of participants. To excel in this role, you should have a minimum of 2 years of hands-on experience with Hadoop/Big Data technology in the corporate sector. Your profile should showcase excellent knowledge of Hadoop, Big Data, HDFS, MapReduce, Pig, Hive, Sqoop, ZooKeeper, HBase, and Java. Additionally, strong communication and presentation skills are essential for effectively delivering the training content. We are looking for candidates with dynamic personalities who can engage and inspire participants during the training sessions. If you meet these criteria and are passionate about sharing your knowledge of Big Data and Hadoop, we have weekend positions available for working faculty that could be a perfect fit for you.
Posted 2 weeks ago
5.0 - 7.0 years
11 - 21 Lacs
mumbai
Work from Office
Looking for an immediate joiner - Mumbai (work from office).

Roles and Responsibilities:
- Programming skills: knowledge of statistical programming languages like R and Python and database query languages like SQL, Hive, and Pig is desirable; familiarity with Scala, Java, or C++ is an added advantage
- Data mining, or extracting usable data from valuable data sources
- Carrying out the preprocessing of structured and unstructured data
- Enhancing data collection procedures to include all relevant information for developing analytic systems
- Processing, cleansing, and validating the integrity of data to be used for analysis
- Analyzing large amounts of information to find patterns and solutions
- Data wrangling: proficiency in handling imperfections in data is an important aspect of this data scientist role
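The preprocessing and data-wrangling steps above can be sketched in plain Python: imputing a missing value with the median, normalizing text fields, and dropping duplicates. The sample records and field names are invented for illustration:

```python
from statistics import median

def wrangle(records):
    """Impute missing ages with the median, normalize names/cities, drop duplicates."""
    ages = [r["age"] for r in records if r["age"] is not None]
    fallback = median(ages)  # typical value used to fill gaps
    cleaned, seen = [], set()
    for r in records:
        row = (
            r["name"].strip().title(),                      # " asha " -> "Asha"
            r["age"] if r["age"] is not None else fallback,  # fill missing age
            r["city"].strip().lower(),                       # "Pune " -> "pune"
        )
        if row not in seen:  # drop exact duplicates after normalization
            seen.add(row)
            cleaned.append(row)
    return cleaned

raw = [
    {"name": " asha ", "age": 34, "city": "Pune "},
    {"name": "Ravi", "age": None, "city": "pune"},
    {"name": " asha ", "age": 34, "city": "Pune "},  # exact duplicate
]
print(wrangle(raw))  # → [('Asha', 34, 'pune'), ('Ravi', 34, 'pune')]
```

Libraries like pandas wrap the same operations (`fillna`, `str.strip`, `drop_duplicates`) over whole columns at once.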
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
kolkata, west bengal
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. As a Senior Data Engineer at EY's GDS Assurance Digital, your mission is to develop, implement, and integrate technology solutions that better serve audit clients and engagement teams. You will have the opportunity to develop a deep audit-related technical knowledge and outstanding database, data analytics, and programming skills. The role involves working side-by-side with the firm's partners, clients, and audit technical subject matter experts to develop and incorporate technology solutions that enhance value-add, improve efficiencies, and enable clients with disruptive and market-leading tools supporting Assurance. You will be part of a team that provides solution architecture, application development, testing, and maintenance support to the global Assurance service line. 
Key Responsibilities:
- Gathering, organizing, and analyzing data to meet ever-increasing regulations
- Working on Microsoft technology-based projects for customers globally
- Utilizing SQL, NoSQL databases such as HBase/Cassandra/MongoDB, and Big Data querying tools like Pig and Hive
- Implementing ETL processes using tools like Alteryx or Azure Data Factory
- Experience in NiFi and reporting tools like Power BI/Tableau/Spotfire
- Understanding complex concepts and using technology for data modeling, analysis, visualization, or process automation
- Working within a multi-disciplinary team structure as well as independently
- Demonstrating analytical and systematic problem-solving skills
- Communicating technical information effectively to diverse audiences
- Planning, scheduling, and monitoring work activities to meet targets
- Absorbing new technical information and applying it effectively
- Maintaining client relationships, business development, and software development best practices

Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Systems Management, Accounting, Finance, or a related field
- 6 to 9 years of industry experience
- Strong technical skills, including SQL, NoSQL databases, Big Data querying tools, ETL implementation, NiFi, and reporting tools
- Analytical and decision-making capabilities
- Strong customer focus, teamwork, and problem-solving skills

Join EY in building a better working world, creating long-term value for clients, people, and society, and fostering trust in the capital markets. EY teams across the globe provide trust through assurance and help clients grow, transform, and operate in a rapidly evolving business landscape.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
You are ready to gain the skills and experience needed to grow within your role and advance your career, and we have the perfect software engineering opportunity for you. As a Software Engineer II at JPMorgan Chase within the Employee Platforms team, you will be part of an agile team dedicated to enhancing, designing, and delivering the software components of the firm's cutting-edge technology products in a secure, stable, and scalable manner. In your role as an emerging member of the software engineering team, you will execute software solutions by designing, developing, and technically troubleshooting various components within a technical product, application, or system, while acquiring the skills and experience necessary for growth in your position. Job responsibilities include managing data integration and data analysis of disparate systems, building extensible data acquisition and integration solutions, implementing processes and logic to extract, transform, and distribute data across one or more data stores, providing problem-solving expertise, complex analysis of data, and developing business intelligence integration designs. You will interface with other internal product development teams and cross-functional teams, work with remote and geographically distributed teams, and demonstrate excellent communication, presentation, interpersonal, and analytical skills. 
Required qualifications, capabilities, and skills for this role include formal training or certification in software engineering concepts with 2+ years of applied experience; hands-on experience in data integration projects using Big Data technologies related to human resources analytics; strong experience in CI/CD using Jenkins, Git, Artifactory, YAML, and Maven for cloud deployments; a minimum of 3 years of hands-on experience with Spark and Scala; good knowledge of Big Data querying tools like Pig, Hive, and Impala; strong experience integrating data from different file-storage formats; a strong technical understanding of building scalable, high-performance distributed services/systems; knowledge of Data Warehousing and Data Lake concepts; experience with open-source Java and API standards; strong problem-solving, troubleshooting, and analytical skills; and experience with technologies like Oracle/SQL and NoSQL data stores such as DynamoDB. Preferred qualifications, capabilities, and skills include familiarity with modern front-end technologies, experience in AWS data warehousing and database platforms, exposure to cloud technologies, and the ability to quickly learn new technologies in a dynamic environment.
Posted 2 weeks ago
5.0 - 9.0 years
13 - 23 Lacs
bengaluru
Hybrid
The Role Develops and program methods, automated processes, and systems to cleanse, integrate and analyze structured and unstructured, diverse big data sources to generate actionable insights and solutions using machine learning and advanced analytics . Interprets and communicates insights and findings from analyses and experiments to other analysts, data scientists, team members and business partners. The Main Responsibilities Support the development of end-to-end analytics solutions by assisting in the design and implementation of solutions that cover the entire data science lifecycle, including data discovery, cleaning, exploratory data analysis, model building, and deployment. Assist with operationalizing models and participate in the iterative process of refining models and insights based on feedback and business requirements. Analyze data and build predictive, prescriptive, and advanced analytical models in various areas including capacity planning, effect/anomaly detection, predictive asset failure/maintenance, workload optimization, customer segmentation and business performance. Gain direct experience with various modeling techniques such as clustering, regression, and time series forecasting, applying these techniques to generate actionable insights and recommendations. Mine information for previously unknown patterns and insights hidden in these assets and leverage them for competitive advantage. Create compelling data visualizations and dashboards to effectively communicate findings to both technical and non-technical audiences. Present insights in a clear, concise, and actionable manner. Collaborate within and across cross-functional teams, working closely with data engineers, data scientists, and business stakeholders to understand business problems, gather requirements, and communicate insights effectively. Contribute to collaborative problem-solving sessions and agile development processes. 
- Develop and operationalize end-to-end machine learning pipelines on Databricks, including feature engineering, model training, evaluation, and deployment.
- Implement and manage MLOps practices, integrating Git for version control, CI/CD pipelines for model deployment, and automated monitoring of models in production.
- Develop and consume RESTful APIs for data integration, enabling seamless connectivity between analytics applications and external systems.
- Ensure reproducibility, auditability, and governance of data science models by adhering to enterprise MLOps standards and frameworks.
- Support analytics democratization by packaging models as reusable components and APIs for consumption across the enterprise.
What We Look for in a Candidate
- Able to apply techniques such as classification, clustering, regression, deep learning, association, anomaly detection, time series forecasting, Hidden Markov models, and Bayesian inference to solve pragmatic business problems.
- Able to design working models and implement them on Big Data systems using MapReduce or Spark frameworks.
- Familiar with Hadoop, Pig, Hive, SCOPE, Cosmos, or similar technologies.
- Able to work within an agile, iterative DevOps development process.
Experience:
- 3+ years of experience delivering Machine Learning and Advanced Analytics solutions
- Experience with statistical programming environments like Python, R, SPSS, or IBM Watson Studio
- Experience building data models and performing complex queries using SQL
- Experience performance-tuning large datasets
- Experience building large data pipelines and/or web services
- Experience developing visualizations and dashboards using Power BI or similar tools
- Fluent in one or more object-oriented languages like C#, C++, Scala, or Java, and scripting languages like Python or Ruby
"We are an equal opportunity employer committed to fair and ethical hiring practices. We do not charge any fees or accept any form of payment from candidates at any stage of the recruitment process.
If anyone claims to offer employment opportunities in our company in exchange for money or any other benefit, please treat it as fraudulent and report it immediately."
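The anomaly-detection duties named in the listing above can be illustrated with a minimal sketch. The data, threshold, and function name below are illustrative assumptions for a simple z-score detector, not anything specified by the posting:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.0):
    # Flag points whose distance from the mean exceeds `threshold` standard
    # deviations; a common first-pass check before heavier models.
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

# Hypothetical sensor readings with one obvious outlier.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 42.0, 10.2]
print(zscore_anomalies(readings))  # → [42.0]
```

Production systems typically replace the global mean/stdev with a rolling window or a seasonal baseline, but the flag-points-far-from-expected structure is the same.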
Posted 3 weeks ago
7.0 - 11.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Platform Engineer - Tech Lead at Deutsche Bank in Pune, India, you will be part of the DB Technology global team of tech specialists. Your role involves leading a group of engineers working on cutting-edge technologies in Hadoop, Big Data, GCP, Terraform, BigQuery, Dataproc, and data management to develop robust data pipelines, ensure data quality, and implement efficient data management solutions. Your leadership will drive innovation, maintain high standards in data infrastructure, and mentor team members to support data-driven initiatives. You will collaborate with data engineers, analysts, cross-functional teams, and stakeholders to ensure the data platform meets the organization's needs. You will work on a hybrid data platform to unlock new insights and drive business growth, contributing to all stages of software delivery, from initial analysis to production support, within a cross-functional agile delivery team.
Key Responsibilities:
- Lead a cross-functional team in designing, developing, and implementing on-prem and cloud-based data solutions.
- Provide technical guidance and mentorship to foster continuous learning and improvement.
- Collaborate with product management and stakeholders to define technical requirements and establish delivery priorities.
- Architect and implement scalable, efficient, and reliable data management solutions for complex data workflows and analytics.
- Evaluate tools, technologies, and best practices to enhance the data platform.
- Drive adoption of microservices, containerization, and serverless architectures.
- Establish and enforce best practices in coding, testing, and deployment.
- Oversee code reviews and provide feedback to promote code quality and team growth.
Skills and Experience:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7+ years of software engineering experience with a focus on Big Data and GCP technologies.
- Strong leadership skills with experience in mentorship and team growth.
- Expertise in designing and implementing data pipelines, ETL processes, and real-time data processing.
- Hands-on experience with Hadoop ecosystem tools and Google Cloud Platform services.
- Understanding of data quality management and best practices.
- Familiarity with containerization and orchestration tools.
- Strong problem-solving and communication skills.
Deutsche Bank offers a culture of continuous learning, training, and development to support your career progression. You will receive coaching and support from experts in your team and benefit from a range of flexible benefits tailored to your needs. Join us in creating innovative solutions and driving business growth at Deutsche Bank.
Posted 3 weeks ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
The ideal candidate should be highly interested and available immediately. Please submit your resume along with your total experience, current CTC, notice period, and current location details to Nitin.patil@ust.com.
You will be responsible for designing, developing, and optimizing data pipelines and ETL workflows. Your work will involve Apache Hadoop, Airflow, Kubernetes, and containers to streamline data processing. Additionally, you will implement data analytics and mining techniques to derive valuable business insights. Managing cloud-based big data solutions on GCP and Azure will also be part of your job. Lastly, you will troubleshoot Hadoop log files and utilize multiple data processing engines for scalable data solutions.
To excel in this role, you must possess:
- Proficiency in Scala, Spark, PySpark, Python, and SQL.
- Hands-on experience with the Hadoop ecosystem, Hive, Pig, and MapReduce.
- Experience in ETL, data warehouse design, and data cleansing (highly beneficial).
- Familiarity with data pipeline orchestration tools such as Apache Airflow.
- Knowledge of Kubernetes, containers, and cloud platforms such as GCP and Azure.
If you are a seasoned big data engineer with a passion for Scala and cloud technologies, we encourage you to apply for this exciting opportunity.
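The extract-transform-load work this role describes can be sketched in miniature. The sketch below uses Python's stdlib sqlite3 as a stand-in for a real warehouse; the table name, columns, and cleansing rules are assumptions for illustration only:

```python
import sqlite3

def run_etl(rows):
    # Toy ETL step: extract raw rows, cleanse them, load into a warehouse table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    # Transform: drop records with missing amounts, normalize region names.
    cleaned = [(r.strip().lower(), a) for r, a in rows if a is not None]
    conn.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
    # A downstream aggregate, standing in for the analytics consumers.
    total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
    conn.close()
    return cleaned, total

raw = [(" East ", 100.0), ("West", None), ("east", 50.0)]
cleaned, total = run_etl(raw)
print(cleaned, total)  # → [('east', 100.0), ('east', 50.0)] 150.0
```

In the Spark/Airflow stack the posting names, the same shape appears as a DAG of tasks, each performing one extract, transform, or load step at cluster scale.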
Posted 1 month ago
1.0 - 5.0 years
0 Lacs
delhi
On-site
As an experienced data analytics professional with 1 to 2 years of experience, you will be responsible for developing and implementing data analytics methodologies. Your role will require good interpersonal skills along with excellent communication abilities. Your technical skills must include proficiency in Python, machine learning, deep learning, data wrangling, and integration with Big Data tools such as Hadoop, Sqoop, Impala, Hive, Pig, and SparkR. You should also have a solid understanding of statistics, data mining, algorithms, time series analysis, forecasting, SQL queries, and Tableau data visualization. A good grasp of technologies like Hadoop, HBase, Hive, Pig, MapReduce, Python, R, Java, Apache Spark, and Impala, as well as machine learning algorithms, is essential for this role.
Your responsibilities will involve developing training content on Big Data and Hadoop technologies for students, working professionals, and corporates. You will conduct both online and classroom training sessions, provide practical use cases and assignments, and design self-paced recorded training sessions. It is important to continuously enhance teaching methodologies for an effective online learning experience and to work collaboratively in small teams to make a significant impact. You will design and oversee the development of real-time projects to give trainees practical exposure. Additionally, you may work as a consultant or architect, developing and training real-time Big Data applications for corporate clients on a part-time or full-time basis. Hands-on knowledge of tools like Anaconda Navigator, Jupyter Notebook, Hadoop, Hive, Pig, MapReduce, Apache Spark, Impala, SQL, and Tableau is required to excel in this role.
Posted 1 month ago
4.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
As a Big Data Architect with 4 years of experience, you will be responsible for designing and implementing scalable solutions using technologies such as Spark, Scala, Hadoop MapReduce/HDFS, Pig, Hive, and AWS cloud computing. Your role will involve hands-on experience with tools like EMR, EC2, Pentaho BI, Impala, Elasticsearch, Apache Kafka, Node.js, Redis, Logstash, StatsD, Ganglia, Zeppelin, Hue, and Kettle. Additionally, you should have sound knowledge of machine learning, ZooKeeper, Bootstrap.js, Apache Flume, Fluentd, collectd, Sqoop, Presto, Tableau, R, Grok, MongoDB, Apache Storm, and HBase.
To excel in this role, you must have a strong development background in both Core Java and Advanced Java. A Bachelor's degree in Computer Science or Information Technology, or an MCA, is required, along with 4 years of relevant experience. Your analytical and problem-solving skills will be put to the test as you tackle complex data challenges. Attention to detail is crucial, and you should possess excellent written and verbal communication skills. This position requires you to work independently while also being an effective team player. With 10 years of overall experience, you will be based in either Pune or Hyderabad, India. Join us in this dynamic role where you will have the opportunity to contribute to cutting-edge data architecture solutions.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
As an ETL Testing & Big Data professional, you will be responsible for designing and implementing ETL test strategies based on business requirements. Your role involves reviewing and analyzing ETL source code, as well as developing and executing test plans and test cases for ETL processes. Data validation and reconciliation using SQL queries will be a key aspect of your responsibilities. Monitoring ETL jobs, resolving issues affecting data accuracy, and performance testing ETL processes with a focus on optimization are crucial tasks in this role. Ensuring data quality and integrity across various data sources, along with coordinating with development teams to troubleshoot issues and suggest improvements, is essential for success.
You will be expected to use automation tools to improve the efficiency of testing processes and to conduct regression testing after ETL releases or updates. Documenting test results, issues, and proposals for resolution, as well as supporting business users with data-related queries, are integral parts of the role. Staying current with the latest trends in ETL testing and big data technologies, working closely with data architects to ensure effective data modeling, and participating in technical discussions to contribute to knowledge sharing are also key aspects.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in ETL testing and big data environments.
- Strong proficiency in SQL and data modeling techniques.
- Hands-on experience with the Hadoop ecosystem and related tools.
- Familiarity with ETL tools such as Informatica, Talend, or similar.
- Experience with data quality frameworks and methodologies.
- Knowledge of big data technologies like Spark, Hive, or Pig.
- Excellent analytical and problem-solving skills.
- Proficient communication skills for effective collaboration.
- Ability to manage multiple tasks and meet deadlines efficiently.
- Experience in Java or scripting languages is a plus.
- Strong attention to detail and a commitment to delivering quality work.
- Certifications in data management or testing are a plus.
- Ability to work independently and as part of a team.
- Willingness to adapt to evolving technologies and methodologies.
Skills required: scripting languages, data modeling, data quality frameworks, Hive, Talend, analytical skills, SQL, performance testing, automation tools, Pig, Hadoop ecosystem, ETL testing, Informatica, Hadoop, data quality, big data, Java, regression testing, Spark
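The data validation and reconciliation work this listing describes can be sketched minimally. Stdlib sqlite3 stands in for the real source and target systems here, and the table names, column, and checks are illustrative assumptions:

```python
import sqlite3

def reconcile(conn, source, target):
    # Compare row counts and a column checksum between source and target
    # tables; a typical smoke test after an ETL load.
    checks = {}
    for table in (source, target):
        count = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        total = conn.execute(f"SELECT SUM(amount) FROM {table}").fetchone()[0]
        checks[table] = (count, total)
    return checks[source] == checks[target], checks

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (amount REAL)")
conn.execute("CREATE TABLE tgt (amount REAL)")
conn.executemany("INSERT INTO src VALUES (?)", [(10.0,), (20.0,)])
conn.executemany("INSERT INTO tgt VALUES (?)", [(10.0,), (20.0,)])
ok, checks = reconcile(conn, "src", "tgt")
print(ok)  # → True
```

Real suites extend the same pattern with per-key diffs and null/duplicate checks, usually wired into a regression harness so each ETL release reruns them automatically.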
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
chennai, tamil nadu
On-site
The Applications Development Senior Programmer Analyst position is an intermediate level role where you will be responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities. Your responsibilities will include conducting tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establishing and implementing new or revised applications systems and programs to meet specific business needs or user areas. You will also be required to monitor and control all phases of the development process, provide user and operational support on applications to business users, and recommend and develop security measures in post-implementation analysis. As the Applications Development Senior Programmer Analyst, you will utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, evaluate business and system processes, recommend advanced programming solutions, and ensure that essential procedures are followed. Additionally, you will serve as an advisor or coach to new or lower-level analysts, operate with a limited level of direct supervision, and act as a subject matter expert to senior stakeholders and other team members. To qualify for this role, you should have 8-12 years of relevant experience in systems analysis and programming of software applications, managing and implementing successful projects, and working knowledge of consulting/project management techniques/methods. You should also have the ability to work under pressure, manage deadlines, and adapt to unexpected changes in expectations or requirements. A Bachelor's degree or equivalent experience is required for this position. 
In addition to the general description above, the ideal candidate should have 8 to 12 years of application development experience through the full lifecycle, with expertise in UI architecture patterns such as Micro Frontend and Nx. Proficiency in Core Java/J2EE applications, data structures, algorithms, Hadoop, the MapReduce framework, Spark, YARN, and other relevant technologies is essential. Experience with the Big Data Spark ecosystem, ETL, BI tools, agile environments, test-driven development, and optimizing software solutions for performance and stability is also preferred. This job description provides an overview of the responsibilities and qualifications for the Applications Development Senior Programmer Analyst role. Other job-related duties may be assigned as required.
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
chennai, tamil nadu
On-site
You have experience in ETL testing and are familiar with Agile methodology. With a minimum of 4-6 years of testing experience in test planning and execution, you possess working knowledge of database testing. Prior experience in the auditing domain would be advantageous. Your strong application analysis, troubleshooting, and behavioral skills, along with extensive experience in manual testing, will be valuable. While experience in automation scripting is not mandatory, it would be beneficial. You are adept at leading discussions with business, development, and vendor teams for testing activities such as defect coordination and test scenario reviews. Your excellent verbal and written communication skills enable you to communicate effectively with various stakeholders. You are capable of working independently and collaboratively with onshore and offshore teams. The role requires an experienced ETL developer with proficiency in Big Data technologies like Hadoop.
Key Skills Required:
- Hadoop (Hortonworks), HDFS
- Hive, Pig, Knox, Ambari, Ranger, Oozie
- Talend, SSIS
- MySQL, MS SQL Server, Oracle
- Windows, Linux
Being open to working 2nd shifts (1pm - 10pm) is essential for this role. Your excellent English communication skills will be crucial for effective collaboration. If you are interested, please share your profile on mytestingcareer.com. When responding, kindly include your current CTC, expected CTC, notice period, current location, and contact number.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
You have over 5 years of experience building production-grade neural network models using Computer Vision or Natural Language Processing techniques. You possess a strong understanding of various machine learning techniques and algorithms, including k-NN, Naive Bayes, SVM, decision forests, and neural networks. Experience with deep learning frameworks like TensorFlow, PyTorch, and MXNet is part of your skillset. Proficiency in common data science toolkits such as R, scikit-learn, NumPy, MATLAB, and MLlib is highly desirable. Additionally, you have solid applied statistics skills encompassing distributions, statistical testing, and regression. Your expertise extends to query languages like SQL, Hive, and Pig, and to NoSQL databases.
In this role, you will collaborate with Product Managers, Architects, and Engineering Leadership to conceptualize, strategize, and develop new products focused on AI/ML initiatives. You will develop, drive, and execute the long-term vision and strategy for the Data Science team by engaging with multiple teams and stakeholders across the organization. Your tasks will involve architecting, designing, and implementing large-scale machine learning systems. Specifically, you will develop neural network models for information extraction from mortgage documents using Computer Vision and NLP techniques. Ad-hoc analysis and clear presentation of results to various audiences and key stakeholders will be part of your routine. You will also design experiments, test hypotheses, and build models while conducting advanced data analysis and highly complex algorithm design. Applying advanced statistical, predictive, and machine learning modeling techniques to enhance multiple real-time decision systems will be a key aspect of your role. You will collaborate with development teams to deploy models in the production environment to support ML-driven product features. Furthermore, you will define business-specific performance metrics to assess model effectiveness and continuously monitor and enhance these metrics for models in production.
To qualify for this position, you should hold an M.S. in mathematics, statistics, computer science, or a related field, with a Ph.D. preferred. You must have over 5 years of relevant quantitative and qualitative research and analytics experience. Excellent communication skills and the ability to convey complex topics to a diverse audience are essential for this role.
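Among the classical techniques this listing names, k-NN is simple enough to sketch in a few lines. The toy 2-D dataset and function name below are assumptions for illustration, not part of the posting:

```python
from collections import Counter
from math import dist

def knn_predict(train, query, k=3):
    # Classify a query point by majority vote among its k nearest neighbors
    # under Euclidean distance.
    neighbors = sorted(train, key=lambda pt: dist(pt[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy 2-D dataset: (features, label), two well-separated clusters.
train = [((1.0, 1.0), "a"), ((1.2, 0.9), "a"), ((1.1, 1.1), "a"),
         ((5.0, 5.0), "b"), ((5.2, 4.8), "b"), ((4.9, 5.1), "b")]
print(knn_predict(train, (1.05, 1.0)))  # → a
print(knn_predict(train, (5.1, 5.0)))   # → b
```

In practice one would use a library implementation with an indexed neighbor search, but the vote-among-nearest-points logic is exactly this.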
Posted 1 month ago