4.0 - 8.0 years
0 Lacs
chennai, tamil nadu
On-site
You have experience in ETL testing and are familiar with Agile methodology. With a minimum of 4-6 years of testing experience in test planning and execution, you possess working knowledge of database testing. Prior experience in the auditing domain would be advantageous. Your strong application analysis, troubleshooting, and behavioral skills, along with extensive experience in manual testing, will be valuable. Experience in automation scripting is not mandatory but would be beneficial. You are adept at leading discussions with business, development, and vendor teams for testing activities such as defect coordination and test scenario reviews. Your excellent verbal and written communication skills enable you to communicate effectively with various stakeholders, and you are capable of working both independently and collaboratively with onshore and offshore teams.

The role requires an experienced ETL developer with proficiency in Big Data technologies such as Hadoop.

Key Skills Required:
- Hadoop (Hortonworks), HDFS
- Hive, Pig, Knox, Ambari, Ranger, Oozie
- Talend, SSIS
- MySQL, MS SQL Server, Oracle
- Windows, Linux

Being open to working 2nd shifts (1 pm - 10 pm) is essential for this role, and excellent English communication skills are crucial for effective collaboration. If you are interested, please share your profile on mytestingcareer.com. When responding, kindly include your Current CTC, Expected CTC, Notice Period, Current Location, and Contact Number.
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
You have over 5 years of experience in building production-grade neural network models using Computer Vision or Natural Language Processing techniques. You possess a strong understanding of machine learning techniques and algorithms, including k-NN, Naive Bayes, SVM, decision forests, and neural networks. Experience with deep learning frameworks such as TensorFlow, PyTorch, and MXNet is part of your skill set. Proficiency in common data science toolkits such as R, scikit-learn, NumPy, MATLAB, and Spark MLlib is highly desirable. Additionally, you have solid applied statistics skills encompassing distributions, statistical testing, and regression. Your expertise extends to query languages like SQL, Hive, and Pig, and to NoSQL databases.

In this role, you will collaborate with Product Managers, Architects, and Engineering Leadership to conceptualize, strategize, and develop new products focused on AI/ML initiatives. You will be responsible for developing, driving, and executing the long-term vision and strategy for the Data Science team by engaging with multiple teams and stakeholders across the organization. Your tasks will involve architecting, designing, and implementing large-scale machine learning systems. Specifically, you will develop neural network models for information extraction from mortgage documents using Computer Vision and NLP techniques. Ad-hoc analysis and clear presentation of results to various audiences and key stakeholders will be part of your routine.

You will also design experiments, test hypotheses, and build models while conducting advanced data analysis and highly complex algorithm design. Applying advanced statistical, predictive, and machine learning modeling techniques to enhance multiple real-time decision systems will be a key aspect of your role. You will collaborate with development teams to deploy models in the production environment to support ML-driven product features, define business-specific performance metrics to assess model effectiveness, and continuously monitor and enhance these metrics for models in production.

To qualify for this position, you should hold an M.S. in mathematics, statistics, computer science, or a related field; a Ph.D. is preferred. You must have over 5 years of relevant quantitative and qualitative research and analytics experience. Excellent communication skills and the ability to convey complex topics to a diverse audience are essential.
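As a rough illustration of the document-extraction work this posting describes, here is a toy text classifier that routes mortgage-document snippets to field types with scikit-learn, one of the baseline techniques (SVM over TF-IDF features) the ad lists. All data, labels, and names are hypothetical, not from the posting.

```python
# Illustrative sketch only -- a toy field-type classifier in the spirit of
# the NLP work described. All training snippets and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "Borrower Name: Jane Doe",
    "Principal amount: $250,000",
    "Property address: 12 Oak Street",
    "Loan amount of $410,000 fixed",
    "Name of borrower: John Smith",
    "Subject property located at 9 Elm Ave",
]
labels = ["name", "amount", "address", "amount", "name", "address"]

# TF-IDF features feeding a linear SVM, one of the classic baselines
# (k-NN, Naive Bayes, SVM, ...) mentioned above.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)
print(model.predict(["Borrower: A. Patel", "Loan amount: $99,500"]))
```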
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
kochi, kerala
On-site
You will be responsible for big data development and support for production-deployed applications, analyzing business and functional requirements for completeness, and developing code with minimal supervision. Working collaboratively with team members, you will ensure accurate and timely communication and delivery of assigned tasks to guarantee the end products' performance upon release to production. Handling software defects or issues within production timelines and SLA is a key aspect of the role.

Your responsibilities will include authoring test cases within a defined testing strategy, participating in test strategy development for configuration and custom reports, creating test data, assisting in code-merge peer reviews, reporting status and progress to stakeholders, and providing risk assessment throughout development cycles. You should have a strong understanding of the system and big data strategies and approaches adopted by IQVIA, stay current with software development industry knowledge, and be open to production support roles within the project.

To excel in this role, you should have 5-8 years of overall experience, with at least 2-3 years in Big Data; proficiency in Big Data technologies such as HDFS, Hive, Pig, Sqoop, HBase, and Oozie; strong experience with SQL queries and Airflow; familiarity with PostgreSQL, CI/CD, Jenkins, and UNIX commands; excellent communication skills; and proven analytical, logical, and problem-solving abilities. Experience with Spark application development and ETL/ELT tools is preferred. Fine-tuned analytical skills, attention to detail, and the ability to work effectively with colleagues from diverse backgrounds are essential.

The minimum educational requirement is a Bachelor's Degree in Information Technology or a related field, along with 5-8 years of development experience or an equivalent combination of education, training, and experience. IQVIA is a leading global provider of clinical research services, commercial insights, and healthcare intelligence, helping accelerate the development and commercialization of innovative medical treatments to improve patient outcomes and population health worldwide. To learn more, visit https://jobs.iqvia.com.
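To make the "SQL queries and Airflow" requirement concrete, here is a minimal Airflow DAG sketch of the shape such pipelines often take. The DAG id, table name, and shell commands are hypothetical placeholders (the Sqoop and Hive commands are echoed rather than executed), not anything specified by the posting.

```python
# Illustrative sketch only -- a minimal Airflow 2.4+ DAG. Names and
# commands are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_hive_load",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Pull a source table into HDFS with Sqoop (details omitted, echoed here).
    ingest = BashOperator(
        task_id="sqoop_ingest",
        bash_command="echo 'sqoop import --table orders ...'",
    )
    # Refresh the Hive metadata once the ingest lands.
    refresh = BashOperator(
        task_id="hive_refresh",
        bash_command="echo 'hive -e \"MSCK REPAIR TABLE orders;\"'",
    )
    ingest >> refresh
```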
Posted 3 days ago
3.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
You should have strong experience in PySpark, Python, Unix scripting, Spark SQL, and Hive. You must be proficient in writing SQL queries and creating views, and possess excellent oral and written communication skills. Prior experience in the insurance domain would be beneficial. A good understanding of the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, Oozie, and YARN, is required, as is knowledge of AWS services such as Glue, S3, Lambda, Step Functions, and EC2. Experience in data migration from platforms like Hive/S3 to Databricks is a plus. You should be able to prioritize, plan, organize, and manage multiple tasks efficiently while delivering high-quality work.

As a candidate, you should have 6-8 years of technical experience in PySpark and AWS (Glue, EMR, Lambda, Step Functions, S3), with at least 3 years of experience in Big Data/ETL using Python, Spark, and Hive, along with 3+ years of experience in AWS. Your primary key skills should include PySpark, AWS (Glue, EMR, Lambda, Step Functions, S3), and Big Data with Python, Spark, and Hive, plus exposure to Big Data migration. Secondary skills that would be beneficial for this role include Informatica BDM/PowerCenter, Databricks, and MongoDB.
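As a rough sketch of the Hive-to-S3 migration work this posting describes, here is a minimal PySpark job: read a Hive table, apply a filter and a derived column, and land the result as partitioned Parquet on S3. The database, table, column, and bucket names are all hypothetical.

```python
# Illustrative sketch only -- a minimal Hive-source, S3-Parquet-target
# PySpark job. All names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("hive_to_s3_migration")
    .enableHiveSupport()
    .getOrCreate()
)

# Read a Hive table, keep recent policies, and derive a simple flag.
policies = spark.table("insurance.policies")
recent = (
    policies
    .where(F.col("policy_year") >= 2020)
    .withColumn("high_value", F.col("premium") > 100000)
)

# Land the result as Parquet on S3, partitioned for downstream consumers.
recent.write.mode("overwrite").partitionBy("policy_year") \
    .parquet("s3://example-bucket/curated/policies/")
```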
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
haryana
On-site
Join GlobalLogic as a valuable member of the team working on a significant software project for a world-class company that provides M2M/IoT 4G/5G modules to industries such as automotive, healthcare, and logistics. Your engagement will involve contributing to the development of end-user modules' firmware, implementing new features, maintaining compatibility with the latest telecommunication and industry standards, and analyzing and estimating customer requirements.

Requirements:
- BA/BS degree in Computer Science, Mathematics, or a related technical field, or equivalent practical experience.
- Proficiency in Cloud SQL and Cloud Bigtable.
- Experience with Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub, and Genomics.
- Familiarity with Google Transfer Appliance, Cloud Storage Transfer Service, and BigQuery Data Transfer.
- Knowledge of data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and data processing algorithms (MapReduce, Flume).
- Previous experience working with technical customers.
- Proficiency in writing software in languages like Java or Python.
- 6-10 years of relevant consulting, industry, or technology experience.
- Strong problem-solving and troubleshooting skills.
- Excellent communication skills.

Job Responsibilities:
- Hands-on experience working with data warehouses, including technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools.
- Experience in technical consulting.
- Proficiency in architecting and developing software or internet-scale Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have).
- Familiarity with big data, information retrieval, data mining, machine learning, and building high-availability applications with modern web technologies.
- Working knowledge of ITIL and/or agile methodologies.
- Google Data Engineer certification.

What We Offer:
- Culture of caring: a people-first, inclusive environment of acceptance and belonging.
- Learning and development: commitment to continuous learning and growth through programs, training curricula, and hands-on opportunities.
- Interesting and meaningful work: impactful projects that allow for creative problem-solving and exploration of new solutions.
- Balance and flexibility: diverse career areas, roles, and work arrangements that support work-life balance and personal well-being.
- High-trust organization: a focus on integrity, trustworthiness, and ethical practices.

About GlobalLogic: GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner known for collaborating with forward-thinking companies to create innovative digital products and experiences. Join the team in transforming businesses and industries through intelligent products, platforms, and services.
Posted 5 days ago
6.0 - 11.0 years
22 - 27 Lacs
Pune, Bengaluru
Work from Office
Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use platforms such as Azure, Salesforce, and AWS technologies. Build out data lineage artifacts to ensure all current and future systems are properly documented.

Required Candidate profile: strong proficiency with SQL query/development skills; experience developing ETL routines that manipulate and transfer large volumes of data and perform quality checks; experience in the healthcare industry with PHI/PII.
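As one small illustration of the "perform quality checks" part of this role, here is a simple post-load reconciliation in pandas. The file paths, the key column, and the specific checks are hypothetical, not taken from the posting.

```python
# Illustrative sketch only -- a simple post-load data quality check.
# Paths and column names are hypothetical.
import pandas as pd

source = pd.read_csv("extracts/source_orders.csv")   # hypothetical extract
target = pd.read_csv("extracts/target_orders.csv")   # hypothetical load

checks = {
    "row_count_matches": len(source) == len(target),
    "no_null_keys": target["order_id"].notna().all(),
    "no_duplicate_keys": target["order_id"].is_unique,
}

for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")

# Fail the job if any check failed, so the scheduler can alert on it.
if not all(checks.values()):
    raise SystemExit(1)
```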
Posted 6 days ago
15.0 - 19.0 years
0 Lacs
karnataka
On-site
Publicis Sapient is seeking a Principal Data Scientist to join its Data Science practice. In this role, you will act as a trusted advisor to clients, driving innovation in applied machine learning and statistical analysis, and you will lead efforts to enhance the group's capabilities for the future. Your responsibilities will include leading teams to develop solutions powered by learning algorithms, educating teams on problem-solving models in machine learning, and translating business objectives into data-driven solutions. You will work with diverse data sets and cutting-edge technology, and regularly see your insights translate into tangible business results.

Your role will be crucial in integrating machine learning into core market offerings such as eCommerce, advertising, AdTech, and business transformation. You will also direct analyses to enhance the effectiveness of marketing tactics and collaborate with leaders across Publicis Sapient divisions to bring data-driven solutions to market. Key areas of focus will be customer segmentation, media and advertising optimization, recommender systems, fraud analytics, personalization, and forecasting.

Your Impact:
- Design and implement analytical models to support product and project objectives.
- Research and innovate to develop next-generation solutions in digital marketing and customer experience.
- Provide technical leadership and mentorship in data science.
- Enhance the machine learning operations platform for Generative AI across industries.
- Drive the application of machine learning in existing project disciplines.
- Design experiments to measure changes in user experience.
- Segment customers and markets for targeted messaging.
- Direct research on analytics platforms to guide solutions.
- Ensure solution and code quality through design and code reviews.
- Establish standards in machine learning and statistical analysis for consistency and efficiency.
- Assess client needs to adopt appropriate approaches for solving challenges.

Qualifications:
Your Skills & Experience:
- Ph.D. in Computer Science, Math, Physics, Engineering, Statistics, or a related field.
- 15+ years of experience applying statistical learning methods in eCommerce and AdTech.
- Proficiency in Gen AI tools and frameworks, LLM fine-tuning, and LLM ops.
- Strong understanding of regression, classification, and cluster analysis approaches.
- Proficiency in statistical programming with R, SAS, SPSS, MATLAB, or Python.
- Expertise in the Python, R, Scala, and SQL programming languages.

Benefits of Working Here:
- Access to regional benefits.
- Gender-neutral policy.
- 18 paid holidays per year.
- Generous parental leave and a new-parent transition program.
- Flexible work arrangements.
- Employee Assistance Programs for wellness.

Publicis Sapient is a digital transformation partner that helps organizations transition to a digitally enabled state. With a focus on strategy, consulting, customer experience, and agile engineering, the company aims to accelerate clients' businesses by designing products and services that customers truly value. Join a team of over 20,000 people worldwide who are united by core values and a purpose of helping people thrive in the pursuit of the next.

Ideal candidates for this role will have experience in traditional AI and recent experience in Gen AI and Agentic AI. Organization, adaptability to change, and hands-on coding skills are highly valued.
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
As a Java with Hadoop Developer at Airlinq in Gurgaon, India, you will play a vital role in collaborating with the Engineering and Development teams to establish and maintain a robust testing and quality program for Airlinq's products and services. Your responsibilities will include, but are not limited to:
- Being part of a team focused on creating end-to-end IoT solutions using Hadoop to address various industry challenges.
- Building quick prototypes and demonstrations to showcase the value of technologies such as IoT, Machine Learning, Cloud, Microservices, DevOps, and AI to management.
- Developing reusable components, frameworks, and accelerators to streamline the development cycle of future IoT projects.
- Operating effectively with minimal supervision and guidance.
- Configuring cloud platforms for specific use cases.

To excel in this role, you should have a minimum of 3 years of IT experience, with at least 2 years dedicated to working with cloud technologies such as AWS or Azure. You must possess expertise in designing and implementing highly scalable enterprise applications and establishing continuous integration environments on the targeted cloud platform. Proficiency in Java and the Spring Framework, along with strong knowledge of IoT principles, connectivity, security, and data streams, is essential. Familiarity with emerging technologies such as Big Data, NoSQL, Machine Learning, AI, and Blockchain is also required.

Additionally, you should be adept with Big Data technologies like Hadoop, Pig, Hive, and Spark, with hands-on experience on at least one Hadoop platform. Experience in workload migration between on-premise and cloud environments, programming with MapReduce and Spark, and working with core Java, J2EE technologies, Python, Scala, Unix, and Bash scripts is crucial. Strong analytical, problem-solving, and research skills are necessary, along with the ability to think innovatively and work independently.

This position requires 3-7 years of relevant work experience and is based in Gurgaon. The ideal educational background is a B.E./B.Tech. or M.E./M.Tech. in Computer Science or Electronics Engineering, or an MCA.
Posted 1 week ago
4.0 - 12.0 years
0 Lacs
karnataka
On-site
As a Big Data Lead with 7-12 years of experience, you will be responsible for software development in multiple computing languages, working on distributed data processing systems and applications, specifically in Business Intelligence/Data Warehouse (BIDW) programs. You should also have experience in development through testing, preferably on the J2EE stack.

Your knowledge and understanding of best practices and concepts in data warehouse applications will be crucial to your success in this role. You should possess a strong foundation in distributed systems and computing systems, with hands-on engineering skills. Hands-on experience with technologies such as Spark, Scala, Kafka, Hadoop, HBase, Pig, and Hive is required, and an understanding of NoSQL data stores, data modeling, and data management is essential.

Good interpersonal communication skills, along with excellent oral and written communication and analytical skills, are necessary for effective collaboration within the team. Experience with Data Lake implementation as an alternative to a Data Warehouse is preferred. You should have hands-on experience with data frames using Spark SQL and proficiency in SQL. A minimum of two end-to-end implementations of either a Data Warehouse or a Data Lake is required for this Big Data Lead role.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
The Staff ML Scientist position at Visa offers a unique opportunity to engage in cutting-edge applied AI research within the realm of data analytics. As a key member of the team, you will play a pivotal role in driving Visa's strategic vision as a leading data-driven company. Your responsibilities will involve formulating complex business problems into technical data challenges, collaborating closely with product stakeholders to ensure the practicality of solutions, and delivering impactful prototypes and production code.

You will have the chance to experiment with various datasets, both in-house and third-party, to evaluate their relevance to business objectives. Moreover, your role will encompass building data transformations for structured and unstructured data, exploring and refining modeling and scoring algorithms, and implementing methods for adaptive learning and model validation. Your expertise in automation and predictive analytics will be instrumental in enhancing operational efficiency and performance monitoring. In addition to your technical skills, you will be expected to possess a strong academic background and exceptional software engineering capabilities; a proactive, detail-oriented approach and excellent collaboration skills will be essential for success in this role.

This is a hybrid position, allowing for a flexible work arrangement that combines remote work and office presence. The expectation is to work from the office 2-3 set days per week, with a general guideline of being in the office at least 50% of the time based on business requirements.

Qualifications:
- 8 or more years of work experience with a Bachelor's Degree or an Advanced Degree
- Proficiency in modeling techniques such as logistic regression, Naive Bayes, SVM, decision trees, or neural networks
- Ability to program in scripting languages like Perl or Python, and programming languages such as Java, C++, or C#
- Familiarity with statistical tools like SAS, R, and KNIME, and experience with deep learning frameworks like TensorFlow
- Knowledge of Natural Language Processing and of working with large datasets using tools such as Hadoop, MapReduce, Pig, or Hive
- Publications or presentations in recognized Machine Learning and Data Mining journals/conferences would be advantageous

Join Visa as a Staff ML Scientist and contribute to pioneering advancements in applied AI research that drive innovation and shape the future of data analytics.
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
The Applications Development Senior Programmer Analyst plays a crucial role in establishing and implementing new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities.

Your responsibilities will include conducting feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development, and establishing and implementing new or revised applications systems and programs to meet specific business needs or user areas. You will also monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, and provide user and operational support on applications to business users.

Additionally, you will utilize your in-depth specialty knowledge of applications development to analyze complex problems and issues, evaluate business and system processes and industry standards, and make evaluative judgments. You will recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality. Furthermore, you will consult with users, clients, and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems. You will ensure that essential procedures are followed, help define operating standards and processes, and serve as an advisor or coach to new or lower-level analysts. You should be able to operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a subject matter expert to senior stakeholders and other team members.

As an Applications Development Senior Programmer Analyst, you will be expected to assess risk appropriately when making business decisions, with particular consideration for the firm's reputation and the protection of Citigroup, its clients, and its assets. This includes driving compliance with applicable laws, rules, and regulations, adhering to policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency.

Qualifications:
- 6-10 years of relevant experience
- Experience in systems analysis and programming of software applications
- Experience in managing and implementing successful projects
- Working knowledge of consulting/project management techniques and methods
- Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements

Education:
- Bachelor's degree/university degree or equivalent experience

In addition to the general responsibilities and qualifications above, the ideal candidate should have:
- Strong programming skills in Python
- Proficiency in object-oriented programming and data structures
- Good knowledge of design patterns
- Experience with Python frameworks such as Flask and Django
- Strong technical skills in Big Data technologies like PySpark and Hadoop ecosystem components (HDFS, HBase, Hive, Pig)
- Strong experience in PySpark
- A solid understanding of REST web services
- Familiarity with Spark performance tuning and optimization techniques
- Knowledge of databases, including PL/SQL, SQL, and Transact-SQL, with Oracle a plus
- Experience processing data in various file types, such as flat files, XML, Parquet, CSV, and data frames
- Good exposure to UI frameworks and the ability to understand UI architecture
- Proficiency in source code management tools like Git
- Experience with Agile methodology
- Familiarity with issue-tracking tools like Jira

This job description provides a high-level overview of the responsibilities and qualifications for the Applications Development Senior Programmer Analyst role. Other job-related duties may be assigned as required.
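To illustrate the "Python frameworks such as Flask" and "REST web services" items above, here is a minimal Flask sketch of a REST endpoint pair. The routes, payload shape, and port are hypothetical placeholders, not anything specified by the posting.

```python
# Illustrative sketch only -- a minimal Flask REST service. Routes and
# payloads are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/v1/accounts/<int:account_id>", methods=["GET"])
def get_account(account_id):
    # A real service would fetch from a database; this echoes a stub record.
    return jsonify({"id": account_id, "status": "active"})

@app.route("/api/v1/accounts", methods=["POST"])
def create_account():
    payload = request.get_json(force=True)
    # Validation and persistence omitted for brevity.
    return jsonify({"created": payload}), 201

if __name__ == "__main__":
    app.run(port=8080)
```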
Posted 1 week ago
5.0 - 7.0 years
27 - 37 Lacs
Bengaluru
Hybrid
One of our prestigious clients is looking to hire candidates for the below position.

Position Type: Sr. Data Scientist
Years of Experience: 5-7 years
Number of positions: 2
Salary / CTC: up to 38 LPA (based on your current salary and the interview evaluation)

Job Summary

Responsibilities:
- Help design and build the next iteration of process automation in Core Map processes, employing a highly scalable Big Data infrastructure and machine learning as applied to global-scale digital map-making.
- Build and test analytic and statistical models to improve a wide variety of internal data-driven processes for map-making data decisions and system control needs.
- Act as an expert and evangelist in the areas of data mining, machine learning, statistics, and predictive analysis and modeling.

Requirements:
- MS or PhD in a discipline such as Statistics, Applied Mathematics, Computer Science, or Econometrics, with an emphasis or thesis work on one or more of the following: computational statistics/science/engineering, data mining, machine learning, and optimization.
- Minimum of 5 years of related professional experience.
- Knowledge of data mining and analytic methods such as regression, classifiers, clustering, association rules, decision trees, Bayesian network analysis, etc., with expert-level knowledge in one or more of these areas.
- Knowledge of Computer Vision, Deep Learning, and point cloud processing algorithms.
- Proficiency with a statistical analysis package and associated scripting language such as Python, R, Matlab, SAS, etc.
- Programming experience with SQL, shell script, Python, etc.
- Knowledge of, and ideally some experience with, MLOps is preferred.
- Knowledge of, and ideally some experience with, tools such as Pig and Hive for working with big data in Hadoop and/or Spark for data extraction and data preparation for analysis.
- Experience interacting effectively with internal and external customer executives, both technical and non-technical, to explain the uses and value of predictive systems and techniques.
- Demonstrated proficiency in understanding, specifying, and explaining predictive modeling solutions, and in organizing teams of other data scientists and engineers to execute projects delivering those solutions.

Preferred Qualifications:
- Development experience with Java and Scala
- Development experience with Docker
- Development experience with GIS data
- Development experience with NoSQL (e.g., DynamoDB)
- Knowledge of GPU programming (CUDA or OpenCL) on GPU accelerator architecture

Please provide the following information along with your updated resume:
- Your total experience
- Your relevant development experience in Java
- Your relevant development experience in Scala
- Your relevant development experience with Docker
- Your relevant development experience with GIS data
- Your relevant development experience with NoSQL (e.g., DynamoDB)
- Your relevant experience with GPU programming (CUDA or OpenCL) on GPU accelerator architecture
- Your latest education, with year of passing and percentage
- Your certifications, if any (if yes, please provide the certification code or share the certification copy)
- Your notice period
- Whether a buyout option is available (yes/no); if yes, the buyout notice period amount
- Your current location
- Your preferred location
- Work from office / hybrid
- Your current salary
- Your expected salary
- Any active offer (yes/no); if yes, the offered salary details and company name
- Your preferred interview date/time
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
As a Talend ETL Lead, you will be responsible for leading the design and development of scalable ETL pipelines using Talend, integrating with big data platforms, and mentoring junior developers. This is a high-impact, client-facing role requiring hands-on leadership and solution ownership.

Responsibilities:
- Lead the end-to-end development of ETL pipelines using Talend Data Fabric.
- Collaborate with data architects and business stakeholders to understand requirements.
- Build and optimize data ingestion, transformation, and loading processes.
- Ensure high performance, scalability, and reliability of data solutions.
- Mentor and guide junior developers on the team.
- Troubleshoot and resolve ETL-related issues quickly.
- Manage deployments and promote code through different environments.

Qualifications:
- 7+ years of experience in ETL/data engineering.
- Strong hands-on experience with Talend Data Fabric.
- Solid understanding of SQL and the Hadoop ecosystem (HDFS, Hive, Pig, etc.).
- Experience building robust data ingestion pipelines.
- Excellent communication and leadership skills.
Posted 2 weeks ago
4.0 - 12.0 years
0 Lacs
karnataka
On-site
As a Big Data Lead with 7-12 years of experience, you will be responsible for leading the development of data processing systems and applications, specifically in the area of Data Warehousing (DWH). Your role will draw on strong software development skills in multiple computing languages, with a focus on distributed data processing systems and BIDW programs.

You should have a minimum of 4 years of software development experience and a proven track record in developing and testing applications, preferably on the J2EE stack. A sound understanding of best practices and concepts related to data warehouse applications is crucial for this role. Additionally, you should possess a strong foundation in distributed systems and computing systems, with hands-on experience in Spark and Scala, Kafka, Hadoop, HBase, Pig, and Hive. Experience with NoSQL data stores, data modeling, and data management will be beneficial.

Strong interpersonal communication skills are essential, along with excellent oral and written communication abilities. Knowledge of Data Lake implementation as an alternative to data warehousing is desirable. Hands-on experience with Spark SQL and SQL proficiency are mandatory, and you should have a minimum of two end-to-end implementations of either Data Warehouse or Data Lake projects. As a Big Data Lead, you will collaborate with cross-functional teams and drive data-related initiatives to meet business objectives effectively.
Posted 2 weeks ago
5.0 - 10.0 years
22 - 27 Lacs
Pune, Bengaluru
Work from Office
Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use platforms such as Azure, Salesforce, and AWS technologies. Build out data lineage artifacts to ensure all current and future systems are properly documented.

Required Candidate profile: strong proficiency with SQL query/development skills; experience developing ETL routines that manipulate and transfer large volumes of data and perform quality checks; experience in the healthcare industry with PHI/PII.
Posted 1 month ago
5.0 - 10.0 years
22 - 27 Lacs
Chennai, Mumbai (All Areas)
Work from Office
Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use platforms such as Azure, Salesforce, and AWS technologies. Build out data lineage artifacts to ensure all current and future systems are properly documented.

Required Candidate profile: strong proficiency with SQL query/development skills; experience developing ETL routines that manipulate and transfer large volumes of data and perform quality checks; experience in the healthcare industry with PHI/PII.
Posted 1 month ago
6.0 - 8.0 years
8 - 12 Lacs
Chennai
Work from Office
As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges, using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Core Java, Spring Boot, Java 2/EE, Microservices
- Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.)
- Spark
- Good to have: Python

Preferred technical and professional experience: None
Posted 1 month ago
4.0 - 8.0 years
10 - 20 Lacs
Pune, Bengaluru
Work from Office
We are looking for skilled Hadoop and Google Cloud Platform (GCP) engineers to join our dynamic team. If you have hands-on experience with Big Data technologies and cloud ecosystems, we want to hear from you!

Key Skills:
- Hadoop ecosystem (HDFS, MapReduce, YARN, Hive, Spark)
- Google Cloud Platform (BigQuery, Dataproc, Cloud Composer)
- Data ingestion and ETL pipelines
- Strong programming skills (Java, Python, Scala)
- Experience with real-time data processing (Kafka, Spark Streaming)

Why Join Us?
- Work on cutting-edge Big Data projects
- Collaborate with a passionate and innovative team
- Opportunities for growth and learning

Interested candidates, please share your updated resume or connect with us directly!
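As one small slice of the GCP skill set listed above, here is a minimal BigQuery query from Python using the official client library. The project, dataset, and table names are hypothetical, and the call requires configured GCP credentials.

```python
# Illustrative sketch only -- a minimal BigQuery query from Python.
# Project, dataset, and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # needs GCP credentials

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`
    GROUP BY event_date
    ORDER BY event_date DESC
    LIMIT 7
"""
for row in client.query(query).result():
    print(row.event_date, row.events)
```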
Posted 1 month ago
2.0 - 3.0 years
4 - 5 Lacs
Hyderabad
Work from Office
Duration: 12 months
Job Type: Contract
Work Type: Onsite

Job Description: Analyzes business requirements/processes and system integration points to determine appropriate technology solutions. Designs, codes, tests, and documents applications based on system and user requirements.

Requirements:
- 2-4 years of relevant IT experience in data warehousing technologies, with excellent communication and analytical skills
- Informatica 9 or above as an ETL tool
- Teradata/Oracle/SQL Server as the warehouse database
- Very strong SQL and macro skills
- Knowledge of basic to intermediate UNIX commands
- Knowledge of Hadoop: HDFS, Hive, Pig, and YARN
- Knowledge of the ingestion tool StreamSets
- Good to have: knowledge of Spark and Kafka
- Exposure to scheduling tools like Control-M
- Excellent analytical and problem-solving skills
- Excellent communication skills (oral and written)
- Experience with diverse industries, tools, and data warehousing technologies

Responsibilities:
- Prepares flow charts and systems diagrams to assist in problem analysis, and is responsible for preparing design documentation.
- Designs, codes, tests, and debugs software according to the client's standards, policies, and procedures.
- Codes, tests, and documents programs according to system standards, and prepares test data for unit, string, and parallel testing.
- Analyzes business needs and creates software solutions; evaluates and recommends software and hardware solutions to meet user needs.
- Interacts with business users and IT to define current and future application requirements.
- Executes schedules, costs, and documentation to ensure the project comes to a successful conclusion, and initiates corrective action to stay on project schedules.
- May assist in orienting, training, assigning, and checking the work of lower-level employees.
- Leads small- to moderate-budget projects.

Knowledge and Skills:
- Possesses and applies a broad knowledge of application programming processes, procedures, and principles to the completion of complex assignments.
- Competent to analyze diverse and complex problems and to work in most phases of applications programming.
- Beginning to lead small projects or to offer programming solutions at an advanced level.
- Advanced work on standard applications programs, including coding, testing, and debugging, with an advanced ability to troubleshoot program errors.
- Advanced understanding of how technology decisions relate to business needs.

Mandatory Skills:
- Informatica 9 or above as an ETL tool
- Teradata/Oracle/SQL Server as the warehouse database
- Very strong SQL and macro skills
- Good knowledge of UNIX commands

Experience: 2-3 years total, with 2 years relevant
Posted 2 months ago
5 - 10 years
15 - 30 Lacs
Noida, Gurugram, Delhi / NCR
Hybrid
Skills:
- Mandatory: SQL, Python, Databricks, Spark/PySpark
- Good to have: MongoDB, Dataiku DSS, Databricks
- Experience in data processing using Python/Scala
- Advanced working SQL knowledge and expertise using relational databases
Early joiners needed.

Required Candidate profile: experience with ETL development tools like Databricks/Airflow/Snowflake; expert in building and optimizing "big data" data pipelines, architectures, and data sets; proficient in Big Data tools and their ecosystem.
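To make the mandatory SQL-plus-PySpark combination concrete, here is a tiny Spark SQL example: register a DataFrame as a view and aggregate it with plain SQL. The inlined data and column names are hypothetical.

```python
# Illustrative sketch only -- plain SQL over a PySpark DataFrame.
# Data and names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_demo").getOrCreate()

orders = spark.createDataFrame(
    [(1, "north", 120.0), (2, "south", 80.0), (3, "north", 45.5)],
    ["order_id", "region", "amount"],
)
orders.createOrReplaceTempView("orders")

# Aggregate with plain SQL over the registered view.
spark.sql("""
    SELECT region, SUM(amount) AS total
    FROM orders
    GROUP BY region
""").show()
```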
Posted 2 months ago