3.0 - 7.0 years
0 Lacs
kochi, kerala
On-site
As a Data Scientist at Mindcurv with 4+ years of experience, you will be responsible for delivering Data Science projects in the advanced analytics space. Your role involves driving business results through data-based insights, collaborating with stakeholders and functional/tech teams, and discovering solutions hidden in large datasets. You are expected to have experience working as a Data Scientist in the Marketing sector, ideally having delivered use cases for the Automotive industry. Your responsibilities will include identifying valuable data sources, supervising data preprocessing, analyzing information to discover trends, building machine learning models, and presenting insights using data visualization techniques.

To excel in this role, you should have 3-5 years of experience in Analytics systems/program delivery, with at least 2 implementations in Big Data or Advanced Analytics projects. Proficiency in Python is essential, and knowledge of R, PySpark, and SQL is a plus. You should be familiar with various machine learning techniques and advanced statistical concepts, and have hands-on experience with GCP/AWS/Azure platforms. Additionally, you are expected to have experience with business intelligence tools like Tableau and ML frameworks, strong math skills, and excellent communication and presentation abilities.

Your role will involve collaborating with Data Engineering and product development teams to implement various AI algorithms and identify best-fit scenarios for business outcomes. Join Mindcurv to be part of a team of experts dedicated to redefining digital experiences and enabling sustainable business growth through data-driven solutions.
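For orientation, here is a minimal sketch of the kind of modelling task this role describes: training and evaluating a classifier in Python with scikit-learn. The file name, columns, and binary target are hypothetical placeholders, not details from the posting.

```python
# Hedged illustration only: data source, columns, and target are invented.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("campaign_data.csv")      # hypothetical marketing extract
X = df.drop(columns=["converted"])         # feature columns
y = df["converted"]                        # hypothetical binary target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)
print("Test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```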
Posted 4 days ago
5.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
You are an experienced Azure Databricks Engineer who will be responsible for designing, developing, and maintaining scalable data pipelines and supporting data infrastructure in an Azure cloud environment. Your key responsibilities will include designing ETL pipelines using Azure Databricks, building robust data architectures on Azure, collaborating with stakeholders to define data requirements, optimizing data pipelines for performance and reliability, implementing data transformation and cleansing processes, managing Databricks clusters, and leveraging Azure services for data orchestration and storage.

You must possess 5-10 years of experience in data engineering or a related field, with extensive hands-on experience in Azure Databricks and Apache Spark. Strong knowledge of Azure cloud services such as Azure Data Lake, Data Factory, Azure SQL, and Azure Synapse Analytics is required. Experience with Python, Scala, or SQL for data manipulation, ETL frameworks, Delta Lake, Parquet formats, Azure DevOps, CI/CD pipelines, big data architecture, and distributed systems is essential. Knowledge of data modeling, performance tuning, and optimization of big data solutions is expected, along with problem-solving skills and the ability to work in a collaborative environment.

Preferred qualifications include experience with real-time data streaming tools, Azure certifications, machine learning frameworks and their integration with Databricks, and data visualization tools like Power BI. A bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field is required for this role.
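As an illustration of the Databricks-style ETL work described above, here is a minimal PySpark sketch that reads raw CSVs from Azure Data Lake Storage, applies basic cleansing, and writes a Delta table. The storage paths and column names are invented, and a Delta-enabled runtime (e.g., Databricks) is assumed.

```python
# Minimal ETL sketch; paths, columns, and container names are placeholders.
# Assumes a Spark runtime with Delta Lake available (e.g., Databricks).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("abfss://landing@myaccount.dfs.core.windows.net/orders/"))

cleaned = (raw
           .dropDuplicates(["order_id"])                        # de-duplicate
           .withColumn("order_ts", F.to_timestamp("order_ts"))  # normalize types
           .filter(F.col("amount") > 0))                        # basic validation

(cleaned.write
 .format("delta")
 .mode("overwrite")
 .save("abfss://curated@myaccount.dfs.core.windows.net/orders/"))
```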
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
Genpact is a global professional services and solutions firm with over 125,000 employees in more than 30 countries. We are driven by curiosity, entrepreneurial agility, and the desire to create lasting value for our clients, including Fortune Global 500 companies. Our purpose is the relentless pursuit of a world that works better for people, and we serve leading enterprises with deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently seeking applications for the role of Senior Principal Consultant, Research Data Scientist. The ideal candidate should have experience in Text Mining, Natural Language Processing (NLP) tools, data science, Big Data, and algorithms. Full-cycle experience in at least one large-scale Text Mining/NLP project is desirable, including creating a business use case, Text Analytics assessment/roadmap, technology and analytics solutioning, implementation, and change management. Experience in Hadoop, including development in the MapReduce framework, is also required.

The Text Mining Scientist (TMS) will play a crucial role in bridging enterprise database teams and business/functional resources, translating business needs into techno-analytic problems and working with database teams to deliver large-scale text analytics solutions. The right candidate should have prior experience in developing text mining and NLP solutions using open-source tools. Responsibilities include developing transformative AI/ML solutions; managing project delivery, stakeholder/customer expectations, project documentation, and project planning; and staying updated on industrial and academic developments in AI/ML with NLP/NLU applications. The role also involves conceptualizing, designing, building, and developing solution algorithms, interacting with clients to collect requirements, and conducting applied research on text analytics and machine learning projects.

Qualifications we seek:

Minimum Qualifications/Skills:
- MS in Computer Science, Information Systems, or Computer Engineering
- Systems engineering experience with Text Mining/NLP tools, data science, Big Data, and algorithms

Technology:
- Proficiency in open-source text mining paradigms such as NLTK, OpenNLP, OpenCalais, StanfordNLP, GATE, UIMA, and Lucene, and cloud-based NLU tools such as DialogFlow and MS LUIS
- Exposure to statistical toolkits such as R, Weka, S-Plus, Matlab, and SAS Text Miner
- Strong core Java experience, the Hadoop ecosystem, and Python/R programming skills

Methodology:
- Solutioning and consulting experience in verticals like BFSI and CPG
- Solid foundation in AI methodologies such as ML, DL, NLP, and neural networks
- Understanding of NLP and statistics concepts and applications such as sentiment analysis

Preferred Qualifications/Skills:

Technology:
- Expertise in NLP, NLU, and machine learning/deep learning methods
- UI development paradigms, Linux, Windows, GPU experience, Spark, Scala
- Deep learning frameworks such as TensorFlow, Keras, Torch, and Theano

Methodology:
- Social network modeling paradigms
- Text analytics using NLP tools and text analytics implementations

This is a full-time position based in Noida, India. The candidate should have a Master's degree or equivalent education level. The job was posted on Oct 7, 2024, with no unposting date set. The primary skill category is Digital.
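As a small, concrete taste of one task the posting names (sentiment analysis with open-source NLP tooling), here is a sketch using NLTK's VADER analyzer; the sample sentences are invented.

```python
# Tiny sentiment-analysis sketch with NLTK's VADER; inputs are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

for doc in ["The claims process was painless and fast.",
            "Support never answered my emails."]:
    score = sia.polarity_scores(doc)["compound"]  # -1 (negative) to +1 (positive)
    print(f"{score:+.3f}  {doc}")
```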
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
As an integral part of our Data Automation & Transformation team, you will experience unique challenges every day. We are looking for someone with a positive attitude, entrepreneurial spirit, and a willingness to dive in and get things done. This role is crucial to the team and will provide exposure to various aspects of managing a banking office.

In this role, you will focus on building curated Data Products and modernizing data by moving it to Snowflake. Your responsibilities will include working with cloud databases such as AWS and Snowflake, along with coding languages like SQL, Python, and PySpark. You will analyze data patterns across large multi-platform ecosystems and develop automation solutions, analytics frameworks, and data consumption architectures utilized by Decision Sciences, Product Strategy, Finance, Risk, and Modeling teams. Ideally, you should have a strong analytical and technical background in financial services, particularly in the small business banking or commercial banking segments.

Your key responsibilities will involve migrating Private Client Office data to the public cloud (AWS and Snowflake), collaborating closely with the Executive Director of Automation and Transformation on new projects, and partnering with various teams to support data analytics needs. You will also be responsible for developing data models, automating data assets, identifying technology gaps, and supporting data integration projects with external providers.

To qualify for this role, you should have at least 3 years of experience in analytics, business intelligence, data warehousing, or data governance. A Master's or Bachelor's degree in a related field (e.g., Data Analytics, Computer Science, Math/Statistics, or Engineering) is preferred. You must have a solid understanding of programming languages such as SQL, SAS, Python, Spark, Java, or Scala, and experience in building relational data models across different technology platforms. Excellent communication, time management, and multitasking skills are essential for this role, along with experience in data visualization tools and compliance with regulatory standards. Knowledge of risk classification, internal controls, and commercial banking products and services is desirable.

Preferred qualifications include experience with Big Data and cloud platforms, data wrangling tools, dynamic reporting applications like Tableau, and proficiency in data architecture, data mining, and analytical methodologies. Familiarity with job scheduling workflows, code versioning software, and change management tools would be advantageous.
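For illustration, here is a hedged sketch of one step such a migration involves: bulk-loading a curated extract into Snowflake with the official Python connector. The account, credentials, file, and table names are placeholders.

```python
# Hedged sketch; connection parameters, file, and table names are invented.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

df = pd.read_parquet("curated_accounts.parquet")  # hypothetical curated extract

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="CURATED",
)
try:
    write_pandas(conn, df, table_name="ACCOUNTS")  # bulk load into Snowflake
finally:
    conn.close()
```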
Posted 4 days ago
1.0 - 4.0 years
1 - 5 Lacs
Bengaluru
Work from Office
About The Role
We are looking for an agile, motivated, and dedicated personality who is ready to join our P&C Business Operations Team in [BANGALORE] as a Technical Accountant. As a Technical Accountant, you will be at the forefront of handling and analyzing financial data. Your role involves proactive investigation and interpretation of accounting information, processing, balance settlement, and timely collection of funds, all while achieving target KPIs. Your collaborative spirit will be essential as you work across teams and functions, addressing operational issues with internal and external clients. You will own responsibility for an assigned portfolio of [EMEA] clients and collaborate with business partners internally and externally.

Responsibilities:
- Account processing per established guidelines/processes
- Ensure data quality and perform technical verification of accounts and contract wordings
- Balance settlement with external clients according to reinsurance terms
- Timely collection of funds (Accounts Receivable), keeping track of financials within the assigned portfolio
- Ensure financial transactions/payments adhere to processes and guidelines, the quality management framework, and key controls
- Achieve target KPIs (Key Performance Indicators)
- Regular reporting to supervisor and internal stakeholders
- Sharing information with other team members and working cross-functionally, as needed
- Data quality control and risk-management activities according to internal guidelines
- Contact internal business partners and external clients directly (written or verbal) to resolve pending operational issues such as missing accounting information, incorrect data, or payment delays
- Provide administrative support to the team, including managing tasks and compiling reports for streamlined operations

About The Team
You will join a very experienced and highly motivated Operations team handling reinsurance portfolios for [EMEA]. Our responsibilities require regular interactions with peers and experts from other locations. We have a strong link to the other Operations teams and collaborate daily to deliver the best service and most value to our clients. With our continuous-improvement mindset, our aim is to provide our external clients and internal partners with fast, easy, and effective ways of conducting business within an environment where the risks are understood.

About You
- Minimum Bachelor's in Administration, Actuarial Sciences, Finance, Accounting, Insurance, or related fields
- Previous experience with reinsurance is desirable
- Flair for figures and proficiency in Excel; Power BI would be an added advantage
- Good verbal and written business interpersonal skills
- An agile team player with the ability to manage your own workload and demonstrate a sense of accountability, responsibility, and commitment
- Ability to work collaboratively, flexibly, and constructively in a team/group environment, including in virtual set-ups
- Ability to challenge the status quo and manage multiple business partners (multi-cultural and cross-functional teams in a virtual set-up)
- Experience with digital applications, automation, solutions, and big data would be a plus
- Insurance/finance-related certifications will be an added advantage
- Advanced English

About Swiss Re
Swiss Re is one of the world's leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. We cover both Property & Casualty and Life & Health. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world.

Our success depends on our ability to build an inclusive culture encouraging fresh perspectives and innovative thinking. We embrace a workplace where everyone has equal opportunities to thrive and develop professionally regardless of their age, gender, race, ethnicity, gender identity and/or expression, sexual orientation, physical or mental ability, skillset, thought or other characteristics. In our inclusive and flexible environment everyone can bring their authentic selves to work and their passion for sustainability. If you are an experienced professional returning to the workforce after a career break, we encourage you to apply for open positions that match your skills and experience.

Reference Code: 134259
Posted 5 days ago
8.0 - 10.0 years
22 - 27 Lacs
Pune
Work from Office
AI/ML Engineer (Specializing in NLP/ML, Large Data Processing, and Generative AI)

Job Summary
Synechron seeks a highly skilled AI/ML Engineer specializing in Natural Language Processing (NLP), Large Language Models (LLMs), Foundation Models (FMs), and Generative AI (GenAI). The successful candidate will design, develop, and deploy advanced AI solutions, contributing to innovative projects that transform monolithic systems into scalable microservices integrated with leading cloud platforms such as Azure, Amazon Bedrock, and Google Gemini. This role plays a critical part in advancing Synechron's capabilities in cutting-edge AI technologies, enabling impactful business insights and product innovations.

Software Requirements
Required proficiency:
- Python (core libraries: TensorFlow, PyTorch, Hugging Face Transformers, etc.)
- Cloud platforms: Azure, AWS, Google Cloud (familiarity with AI/ML services)
- Containerization: Docker, Kubernetes
- Version control: Git
- Data management tools: SQL, NoSQL databases (e.g., MongoDB)
- Model deployment and MLOps tools: MLflow, CI/CD pipelines, monitoring tools
Preferred skills:
- Experience with cloud-native AI frameworks and SDKs
- Familiarity with AutoML tools
- Additional programming languages (e.g., Java, Scala)

Overall Responsibilities
- Design, develop, and optimize NLP models, including advanced LLMs and Foundation Models, for diverse business use cases.
- Lead the development of large data pipelines for training, fine-tuning, and deploying models on big data platforms.
- Architect, implement, and maintain scalable AI solutions in line with MLOps best practices.
- Transition legacy monolithic AI systems into modular, microservices-based architectures for scalability and maintainability.
- Build end-to-end AI applications from scratch, including data ingestion, model training, deployment, and integration.
- Implement retrieval-augmented generation techniques for enhanced context understanding and response accuracy (see the sketch below).
- Conduct thorough testing, validation, and debugging of AI/ML models and pipelines.
- Collaborate with cross-functional teams to embed AI capabilities into customer-facing and enterprise products.
- Support ongoing maintenance, monitoring, and scaling of deployed AI systems.
- Document system designs, workflows, and deployment procedures for compliance and knowledge sharing.

Performance outcomes:
- Production-ready AI solutions delivering high accuracy and efficiency.
- Robust data pipelines supporting training and inference at scale.
- Seamless integration of AI models with cloud infrastructure.
- Effective collaboration leading to innovative AI product deployment.

Technical Skills (by category)
Programming languages:
- Essential: Python (TensorFlow, PyTorch, Hugging Face, etc.)
- Preferred: Java, Scala
Databases/data management: SQL (PostgreSQL, MySQL), NoSQL (MongoDB, DynamoDB)
Cloud technologies: Azure AI, AWS SageMaker, Bedrock, Google Cloud Vertex AI, Gemini
Frameworks and libraries: Transformers, Keras, scikit-learn, XGBoost, Hugging Face engines
Development tools & methodologies: Docker, Kubernetes, Git, CI/CD pipelines (Jenkins, Azure DevOps)
Security & compliance: knowledge of data security standards and privacy policies (GDPR, HIPAA as applicable)

Experience Requirements
- 8 to 10 years of hands-on experience in AI/ML development, especially NLP and Generative AI.
- Demonstrated expertise in designing, fine-tuning, and deploying LLMs, FMs, and GenAI solutions.
- Proven ability to develop end-to-end AI applications within cloud environments.
- Experience transforming monolithic architectures into scalable microservices.
- Strong background with big data processing pipelines.
- Prior experience working with cloud-native AI tools and frameworks.
- Industry experience in finance, healthcare, or technology sectors is advantageous.
Alternative experience: candidates with extensive research or academic experience in AI/ML, especially in NLP and large-scale data processing, are eligible if they have practical deployment experience.

Day-to-Day Activities
- Develop and optimize sophisticated NLP/GenAI models fulfilling business requirements.
- Lead data pipeline construction for training and inference workflows.
- Collaborate with data engineers, architects, and product teams to ensure scalable deployment.
- Conduct model testing, validation, and performance tuning.
- Implement and monitor model deployment pipelines, troubleshoot issues, and improve system robustness.
- Document models, pipelines, and deployment procedures for audit and knowledge sharing.
- Stay updated with emerging AI/ML trends, integrating best practices into projects.
- Present findings, progress updates, and technical guidance to stakeholders.

Qualifications
- Bachelor's degree in Computer Science, Data Science, or a related field; Master's or PhD preferred.
- Certifications in AI/ML, Cloud (e.g., AWS, Azure, Google Cloud), or Data Engineering are a plus.
- Proven professional experience with advanced NLP and Generative AI solutions.
- Commitment to continuous learning to keep pace with rapidly evolving AI technologies.

Professional Competencies
- Strong analytical and problem-solving capabilities.
- Excellent communication skills, capable of translating complex technical concepts.
- Collaborative team player with experience working across global teams.
- Adaptability to rapidly changing project scopes and emerging AI trends.
- Innovation-driven mindset with a focus on delivering impactful solutions.
- Time management skills to prioritize and manage multiple projects effectively.
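The retrieval-augmented generation responsibility above lends itself to a compact sketch: retrieve the best-matching passage by embedding similarity, then condition a generator on it. The models and documents below are illustrative choices, not a prescribed stack.

```python
# Minimal RAG sketch; model choices and documents are illustrative only.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

docs = ["Databricks clusters are managed through the workspace UI.",
        "Delta Lake adds ACID transactions to data lakes."]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = embedder.encode(docs, convert_to_tensor=True)

query = "What does Delta Lake provide?"
hit = util.semantic_search(embedder.encode(query, convert_to_tensor=True),
                           doc_emb, top_k=1)[0][0]          # best passage
context = docs[hit["corpus_id"]]

generator = pipeline("text2text-generation", model="google/flan-t5-small")
prompt = f"Answer using the context.\nContext: {context}\nQuestion: {query}"
print(generator(prompt)[0]["generated_text"])
```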
Posted 5 days ago
4.0 - 7.0 years
6 - 9 Lacs
Bengaluru
Work from Office
Job Summary
Synechron is seeking a motivated and experienced Big Data Engineer to design, develop, and implement scalable big data solutions. The ideal candidate will possess strong hands-on experience with Hadoop, Spark, and NoSQL databases, enabling the organization to ingest, process, and analyze vast data sets efficiently. This role contributes directly to the organization's data-driven initiatives by creating reliable data pipelines and collaborating with cross-functional teams to deliver insights that support strategic decision-making and operational excellence.

Purpose: to build and maintain optimized big data architectures that support real-time and batch data processing, enabling analytics, reporting, and machine learning efforts.
Value: by ensuring high-performance and scalable data platforms, this role accelerates data insights, enhances business agility, and ensures data integrity and security.

Software Requirements
Required skills:
- Deep expertise in Hadoop ecosystem components, including the Hadoop Distributed File System (HDFS), Spark (batch and streaming), and related tools.
- Practical experience with NoSQL databases such as Cassandra, MongoDB, and HBase.
- Experience with data ingestion tools like Spark Streaming and Apache Flume.
- Strong programming skills in Java, Scala, or Python.
- Familiarity with DevOps tools such as Git, Jenkins, and Docker, and container orchestration with OpenShift or Kubernetes.
- Working knowledge of cloud platforms like AWS and Azure for deploying and managing data solutions.
Preferred skills:
- Knowledge of additional data ingestion and processing tools.
- Experience with data cataloging or governance frameworks.

Overall Responsibilities
- Design, develop, and optimize large-scale data pipelines and data lakes using Spark, Hadoop, and related tools (see the sketch below).
- Implement data ingestion, transformation, and storage solutions to meet business and analytic needs.
- Collaborate with data scientists, analysts, and cross-functional teams to translate requirements into technical architectures.
- Monitor daily data operations, troubleshoot issues, and improve system performance and scalability.
- Automate deployment and maintenance workflows utilizing DevOps practices and tools.
- Ensure data security, privacy, and compliance standards are upheld across all systems.
- Stay updated with emerging big data technologies to incorporate innovative solutions.

Strategic objectives:
- Enable scalable, reliable, and efficient data processing platforms to support analytics and AI initiatives.
- Improve data quality, accessibility, and timeliness for organizational decision-making.
- Drive automation and continuous improvement in data infrastructure.

Performance outcomes:
- High reliability and performance of data pipelines with minimal downtime.
- Increased data ingestion and processing efficiency.
- Strong collaboration across teams leading to successful project outcomes.

Technical Skills (by category)
Programming languages:
- Essential: Java, Scala, or Python for developing data pipelines and processing scripts.
- Preferred: knowledge of additional languages such as R, or SQL scripting for data manipulation.
Databases & data management:
- Experience with Hadoop HDFS, HBase, Cassandra, MongoDB, and similar NoSQL data stores.
- Familiarity with data modeling, ETL workflows, and data warehousing strategies.
Cloud technologies:
- Practical experience deploying and managing big data solutions on AWS (e.g., EMR, S3) and Azure.
- Knowledge of cloud security practices and resource management.
Frameworks & libraries:
- Extensive use of Hadoop, Spark (structured and streaming), and related libraries.
- Familiarity with serialization formats like Parquet, Avro, or ORC.
Development tools & methodologies:
- Proficiency with Git, Jenkins, Docker, and OpenShift/Kubernetes for versioning, CI/CD, and containerization.
- Experience working within Agile/Scrum environments.
Security & data governance:
- Comprehension of data security protocols, access controls, and compliance regulations.

Experience Requirements
- 4 to 7 years of hands-on experience in big data engineering or related roles.
- Demonstrable experience designing and maintaining large-scale data pipelines, data lakes, and data warehouses.
- Proven aptitude for using Spark, Hadoop, and NoSQL databases effectively in production environments.
- Prior experience in the financial services, healthcare, retail, or telecommunications sectors is a plus.
- Ability to lead technical initiatives and collaborate with multidisciplinary teams.

Day-to-Day Activities
- Develop and optimize data ingestion, processing, and storage workflows.
- Collaborate with data scientists and analysts to architect solutions aligned with business needs.
- Build, test, and deploy scalable data pipelines ensuring high performance and reliability.
- Monitor system health, diagnose issues, and implement improvements for data systems.
- Conduct code reviews and knowledge-sharing sessions within the team.
- Participate in sprint planning, daily stand-ups, and project reviews to ensure timely delivery.
- Stay current with evolving big data tools and best practices.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in big data technologies or cloud platforms are a plus.
- Demonstrable experience leading end-to-end data pipeline solutions.

Professional Competencies
- Strong analytical, troubleshooting, and problem-solving skills.
- Effective communicator with the ability to explain complex concepts to diverse audiences.
- Ability to work collaboratively in a team-oriented environment.
- Adaptability to emerging technologies and shifting priorities.
- High level of organization and attention to detail.
- Drive for continuous learning and process improvement.
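As referenced in the responsibilities above, here is a minimal Spark Structured Streaming sketch of a Kafka-to-data-lake ingestion job. The broker address, topic, and paths are placeholders, and the Kafka connector package is assumed to be available to the Spark session.

```python
# Streaming ingestion sketch; broker, topic, and paths are placeholders.
# Assumes the spark-sql-kafka connector is on the Spark classpath.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()
          .select(F.col("value").cast("string").alias("payload")))

query = (events.writeStream
         .format("parquet")
         .option("path", "/data/lake/events/")                  # landing zone
         .option("checkpointLocation", "/data/checkpoints/events/")
         .start())
query.awaitTermination()
```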
Posted 5 days ago
10.0 - 15.0 years
25 - 30 Lacs
Nagpur, Pune
Work from Office
Design and manage scalable, secure, and high-performance database systems aligned with business goals. Optimize performance, ensure data integrity, and implement modern data solutions. Lead cross-functional collaboration.
Posted 5 days ago
10.0 - 15.0 years
11 - 16 Lacs
Bengaluru
Work from Office
Role Overview:
Skyhigh Security is seeking a Principal Data Engineer to design and build scalable Big Data solutions. You'll leverage your deep expertise in Java and Big Data architecture to process massive datasets and shape our security offerings. If you have extensive experience with distributed systems and cloud platforms, and a passion for data quality, apply now to join our innovative team and make a global impact in cybersecurity!

Our Engineering team is driving the future of cloud security, developing one of the world's largest, most resilient cloud-native data platforms. At Skyhigh Security, we're enabling enterprises to protect their data with deep intelligence and dynamic enforcement across hybrid and multi-cloud environments. As we continue to grow, we're looking for a Principal Data Engineer to help us scale our platform, integrate advanced AI/ML workflows, and lead the evolution of our secure data infrastructure.

Responsibilities:
As a Principal Data Engineer, you will be responsible for:
- Leading the design and implementation of high-scale, cloud-native data pipelines for real-time and batch workloads.
- Collaborating with product managers, architects, and backend teams to translate business needs into secure and scalable data solutions.
- Integrating big data frameworks (like Spark, Kafka, Flink) with cloud-native services (AWS/GCP/Azure) to support security analytics use cases (see the sketch below).
- Driving CI/CD best practices, infrastructure automation, and performance tuning across distributed environments.
- Evaluating and piloting the use of AI/LLM technologies in data pipelines (e.g., anomaly detection, metadata enrichment, automation).
- Evaluating and integrating LLM-based automation and AI-enhanced observability into engineering workflows.
- Ensuring data security and privacy compliance.
- Mentoring engineers, ensuring high engineering standards, and promoting technical excellence across teams.

What We're Looking For (Minimum Qualifications)
- 10+ years of experience in big data architecture and engineering, including deep proficiency with the AWS cloud platform.
- Expertise in distributed systems and frameworks such as Apache Spark, Scala, Kafka, Flink, and Elasticsearch, with experience building production-grade data pipelines.
- Strong programming skills in Java for building scalable data applications.
- Hands-on experience with ETL tools and orchestration systems.
- Solid understanding of data modeling across both relational (PostgreSQL, MySQL) and NoSQL (HBase) databases, and performance tuning.

What Will Make You Stand Out (Preferred Qualifications)
- Experience integrating AI/ML or LLM frameworks (e.g., LangChain, LlamaIndex) into data workflows.
- Experience implementing CI/CD pipelines with Kubernetes, Docker, and Terraform.
- Knowledge of modern data warehousing (e.g., BigQuery, Snowflake) and data governance principles (GDPR, HIPAA).
- Strong ability to translate business goals into technical architecture and mentor teams through delivery.
- Familiarity with visualization tools (Tableau, Power BI) to communicate data insights, even if not a primary responsibility.
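As referenced above, here is a hedged sketch of a security-analytics aggregation of the kind this role describes: windowed event counts per source IP, which a downstream detector could score for anomalies. The topic name and JSON schema are invented.

```python
# Windowed security-event aggregation sketch; topic and schema are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("auth-window-counts").getOrCreate()

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "auth-events")
       .load())

events = raw.select(
    F.get_json_object(F.col("value").cast("string"), "$.src_ip").alias("src_ip"),
    F.col("timestamp"),
)

counts = (events
          .withWatermark("timestamp", "10 minutes")            # bound lateness
          .groupBy(F.window("timestamp", "5 minutes"), "src_ip")
          .count())                                            # events per IP/window

(counts.writeStream.outputMode("update").format("console").start()
 .awaitTermination())
```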
Posted 5 days ago
2.0 - 7.0 years
15 - 20 Lacs
Hyderabad
Work from Office
Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary:
As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications:
- Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 2+ years of Software Engineering or related work experience; OR Master's degree in a related field and 1+ year of such experience; OR PhD in a related field.
- 2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.

Preferred Qualifications:
- 3+ years of experience as a Data Engineer or in a similar role
- Experience with data modeling, data warehousing, and building ETL pipelines
- Solid working experience with Python and AWS analytical technologies and related resources (Glue, Athena, QuickSight, SageMaker, etc.)
- Experience with Big Data tools, platforms, and architecture, with solid working experience with SQL (see the sketch below)
- Experience working in a very large data warehousing environment and with distributed systems
- Solid understanding of various data exchange formats and their complexities
- Industry experience in software development, data engineering, business intelligence, data science, or a related field, with a track record of manipulating, processing, and extracting value from large datasets
- Strong data visualization skills
- Basic understanding of Machine Learning; prior experience in ML Engineering a plus
- Ability to manage on-premises data and make it inter-operate with AWS-based pipelines
- Ability to interface with Wireless Systems/SW engineers and understand the Wireless ML domain; prior experience in the Wireless (5G) domain a plus

Education:
- Bachelor's degree in computer science, engineering, mathematics, or a related technical discipline
- Preferred: Master's in CS/ECE with a Data Science/ML specialization

Minimum Qualifications (for this requisition):
- Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 3+ years of Software Engineering or related work experience; OR Master's degree in a related field; OR PhD in a related field.
- 3+ years of experience with a programming language such as C, C++, Java, Python, etc.

The role develops, creates, and modifies general computer applications software or specialized utility programs; analyzes user needs and develops software solutions; and designs or customizes software for client use with the aim of optimizing operational efficiency. You may analyze and design databases within an application area, working individually or coordinating database development as part of a team, and modify existing software to correct errors, adapt it to new hardware, or improve its performance. You will analyze user needs and software requirements to determine the feasibility of a design within time and cost constraints; confer with systems analysts, engineers, programmers, and others to design systems and obtain information on project limitations and capabilities, performance requirements, and interfaces; store, retrieve, and manipulate data for analysis of system capabilities and requirements; and design, develop, and modify software systems, using scientific analysis and mathematical models to predict and measure the outcomes and consequences of a design.

Principal Duties and Responsibilities:
- Completes assigned coding tasks to specifications on time without significant errors or bugs.
- Adapts to changes and setbacks in order to manage pressure and meet deadlines.
- Collaborates with others inside the project team to accomplish project objectives.
- Communicates with the project lead to provide status and information about impending obstacles.
- Quickly resolves complex software issues and bugs.
- Gathers, integrates, and interprets information specific to a module or sub-block of code from a variety of sources in order to troubleshoot issues and find solutions.
- Seeks others' opinions and shares own opinions with others about ways in which a problem can be addressed differently.
- Participates in technical conversations with tech leads/managers.
- Anticipates and communicates issues with the project team to maintain open communication.
- Makes decisions based on incomplete or changing specifications and obtains adequate resources needed to complete assigned tasks.
- Prioritizes project deadlines and deliverables with minimal supervision.
- Resolves straightforward technical issues and escalates more complex technical issues to an appropriate party (e.g., project lead, colleagues).
- Writes readable code for large features or significant bug fixes to support collaboration with other engineers.
- Determines which work tasks are most important for self and junior engineers, stays focused, and deals with setbacks in a timely manner.
- Unit tests own code to verify the stability and functionality of a feature.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
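As referenced in the preferred qualifications, here is an illustrative sketch of querying an AWS data lake through Athena with boto3. The database, table, and S3 output location are placeholders.

```python
# Illustrative Athena query via boto3; names and locations are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-west-2")

qid = athena.start_query_execution(
    QueryString="SELECT band, COUNT(*) AS n FROM kpi_logs GROUP BY band",
    QueryExecutionContext={"Database": "wireless_ml"},
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)["QueryExecutionId"]

while True:  # poll until the query finishes
    state = athena.get_query_execution(
        QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows[1:]:  # skip the header row
        print([f.get("VarCharValue") for f in row["Data"]])
```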
Posted 5 days ago
6.0 - 10.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Location: Bangalore, India

Thales people architect solutions at the heart of the defence-security continuum. Interoperable and secure information and telecommunications systems for defence, security, and civil operators are based upon innovative use of radiocommunications, networks, and cybersecurity. We are breaking ground with new digital technologies such as 4G mobile communications, cryptography, cloud computing, and big data for use in physical protection systems and critical information systems.

Job Responsibilities:
- Based on technical specifications, you will develop functional tests on products, using the service's Python framework and National Instruments TestStand software in the C language. The software developments are used to control RF instruments and RF switching drawers, or to establish communication links with the products under test.
- You will be responsible for one or more software developments and will work as part of a team of software development and validation engineers based in India and in France.
- You will also contribute to improving competitiveness by proposing optimizations to our software.

Experience level: 6-10 years

Skills:
- Experience in software development with the C and Python languages
- Knowledge of instrumentation and instrument drivers, i.e., understanding how to operate and control measurement devices (use of measuring equipment such as oscilloscopes, network analyzers, spectrum analyzers, signal generators, multimeters, etc.)
- Skills in integration, radio frequency, digital signal processing, and radio environments.
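For flavour, here is a hedged Python sketch of the instrument-control work the posting describes, using the PyVISA library (the actual framework here is the team's own Python framework plus NI TestStand). The VISA address and SCPI commands are generic placeholders; real instruments vary.

```python
# Instrument-control sketch with PyVISA; the address and SCPI strings are
# generic placeholders, and a VISA backend (NI-VISA or pyvisa-py) is assumed.
import pyvisa

rm = pyvisa.ResourceManager()
sa = rm.open_resource("TCPIP0::192.168.1.50::INSTR")  # e.g., spectrum analyzer

print(sa.query("*IDN?"))                 # identify the instrument
sa.write("FREQ:CENT 2.4GHz")             # set center frequency
sa.write("BAND:RES 100kHz")              # set resolution bandwidth
marker_power = sa.query("CALC:MARK:Y?")  # read marker amplitude (dBm)
print("Marker power:", marker_power.strip())
sa.close()
```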
Posted 5 days ago
3.0 - 8.0 years
5 - 9 Lacs
Gurugram
Work from Office
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Analyze business requirements and functional specifications
- Determine the impact of changes on current functionality of the system
- Interact with diverse business partners and technical workgroups
- Be flexible to collaborate with onshore business during US business hours
- Be flexible to support project releases during US business hours
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Undergraduate degree or equivalent experience
- 3+ years of working experience in Python, PySpark, Scala
- 3+ years of experience working on MS SQL Server and NoSQL DBs like Cassandra, etc.
- Hands-on working experience in Azure Databricks
- Solid healthcare domain knowledge
- Exposure to DevOps methodology and creating CI/CD deployment pipelines
- Exposure to Agile methodology, specifically using tools like Rally
- Ability to understand the existing application codebase, perform impact analysis, and update the code when required based on business logic or for optimization
- Proven excellent analytical and communication skills (both verbal and written)

Preferred Qualification:
- Experience in streaming applications (Kafka, Spark Streaming, etc.)

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
Posted 5 days ago
6.0 - 11.0 years
30 - 35 Lacs
Pune
Work from Office
About The Role:
Job Title: Full Stack Developer (mainly DB/ETL experience)
Location: Pune, India

Role Description
Currently DWS sources technology infrastructure, corporate functions systems (Finance, Risk, Legal, Compliance, AFC, Audit, Corporate Services, etc.) and other key services from DB. Project Proteus aims to strategically transform DWS to an Asset Management standalone operating platform: an ambitious and ground-breaking project that delivers separated DWS infrastructure and Corporate Functions in the cloud with essential new capabilities, further enhancing DWS's highly competitive and agile Asset Management capability.

This role offers a unique opportunity to be part of a high-performing team implementing a strategic future-state technology landscape for all of DWS's Corporate Functions globally. We are seeking a highly skilled and experienced ETL developer with Informatica tool experience, along with strong development experience on various RDBMSs and exposure to cloud-based platforms. The ideal candidate will be responsible for designing, developing, and implementing robust and scalable custom solutions, extensions, and integrations in a cloud-first environment. This role requires a deep understanding of data migration, system integration and optimization, and cloud-native development principles, and the ability to work collaboratively with functional teams and business stakeholders. The role needs to provide support to the US business stakeholders and regulatory reporting processes during morning US hours.

What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Create good-quality software designs; a strong sense of software design principles is required
- Get involved with hands-on code development and thorough testing of developed software
- Mentor junior team members on both the technical and functional front
- Review other team members' code
- Participate in and manage daily stand-up meetings
- Articulate issues and risks to management in a timely manner
- Split effort roughly 80% technical involvement and 20% other activities such as team handling, mentoring, status reporting, and year-end appraisals
- Analyse software defects and fix them in a timely manner
- Work closely with stakeholders and other teams such as Functional Analysis and Quality Assurance
- Support testing on behalf of users, operations, and testing teams, potentially including test plans, test cases, test data, and review of interface testing between different applications, when required
- Work with application developers to resolve functional issues from UATs and help find solutions for various functionally difficult areas
- Work closely with business analysts to detail proposed solutions and solution maintenance
- Work with the Application Management area on functional troubleshooting and resolution of reported bugs/issues in applications

Your skills and experience
- Bachelor's degree from an accredited college or university with a concentration in Science or an IT-related discipline (or equivalent)
- Hands-on in technology, with a minimum of 10 years of IT industry experience
- Proficient in Informatica or any ETL tool
- Hands-on experience in Oracle SQL and PL/SQL; exposure to PostgreSQL
- Exposure to Cloud/Big Data technology
- CI/CD (TeamCity or Jenkins) and GitHub usage
- Basic commands on UNIX
- Exposure to the Control-M scheduling tool
- Worked in an Agile/Scrum software development environment
- High analytical capabilities and proven communication skills
- An effective problem solver, able to multi-task and work under tight deadlines
- Identifies and escalates problems at an early stage
- Flexibility and willingness to work autonomously; self-motivated within set competencies in a team and fast-paced environments
- High degree of accuracy and attention to detail

Nice to have
- Any exposure to PySpark will be a plus
- Any exposure to React JS or Angular JS will be a plus
- Architecting and automating the build process for production, using scripts

How we'll support you

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm

We at DWS are committed to creating a diverse and inclusive workplace, one that embraces dialogue and diverse views, and treats everyone fairly to drive a high-performance culture. The value we create for our clients and investors is based on our ability to bring together various perspectives from all over the world and from different backgrounds. It is our experience that teams perform better and deliver improved outcomes when they are able to incorporate a wide range of perspectives. We call this #ConnectingTheDots.
Posted 5 days ago
3.0 - 7.0 years
13 - 18 Lacs
Pune
Work from Office
About The Role:
Job Title: Technical Specialist, Big Data (PySpark) Developer
Location: Pune, India

Role Description
This role is for an Engineer responsible for the design, development, and unit testing of software applications. The candidate is expected to ensure good-quality, maintainable, scalable, and high-performing software applications are delivered to users in an Agile development environment. The candidate should come from a strong technological background and have good working experience in Python and Spark technology. They should be hands-on and able to work independently, requiring minimal technical/tool guidance, and able to technically guide and mentor junior resources in the team. As a developer you will bring extensive design and development skills to strengthen the group of developers within the team, and you will extensively use and apply Continuous Integration tools and practices in the context of Deutsche Bank's digitalization journey.

What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Design and discuss your own solutions for addressing user stories and tasks.
- Develop, unit-test (a small example follows this posting), integrate, deploy, maintain, and improve software.
- Perform peer code review.
- Actively participate in sprint activities and ceremonies, e.g., daily stand-up/scrum meeting, sprint planning, retrospectives, etc.
- Apply continuous integration best practices in general (SCM, build automation, unit testing, dependency management).
- Collaborate with other team members to achieve the sprint objectives.
- Report progress and update Agile team management tools (JIRA/Confluence).
- Manage individual task priorities and deliverables, and take responsibility for the quality of the solutions you provide.
- Contribute to planning and continuous improvement activities, and support the PO, ITAO, developers, and Scrum Master.

Your skills and experience
- Engineer with good development experience on a Big Data platform for at least 5 years.
- Hands-on experience in Spark (Hive, Impala) and in the Python programming language.
- Preferably, experience in BigQuery, Dataproc, Composer, Terraform, GKE, Cloud SQL, and Cloud Functions.
- Experience in set-up, maintenance, and ongoing development of continuous build/integration infrastructure as part of DevOps; able to create and maintain fully automated CI build processes and write build and deployment scripts.
- Experience with development platforms: OpenShift/Kubernetes/Docker configuration and deployment, with DevOps tools e.g., GIT, TeamCity, Maven, SONAR.
- Good knowledge of core SDLC processes and tools such as HP ALM, Jira, and ServiceNow.
- Strong analytical skills and proficient communication skills; fluent in English (written/verbal).
- Ability to work in virtual teams and in matrixed organizations; an excellent team player.
- Open-minded and willing to learn the business and technology; keeps pace with technical innovation and understands the relevant business area.
- Ability to share information and transfer knowledge and expertise to team members.

How we'll support you
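Since the role emphasizes unit-tested PySpark development, here is a small pytest sketch of testing a Spark transformation locally. The transformation and data are invented for illustration.

```python
# Unit-testing sketch for a PySpark transformation; data and logic are invented.
import pytest
from pyspark.sql import SparkSession, Window, functions as F

def dedupe_latest(df):
    """Keep the most recent row per account_id."""
    w = Window.partitionBy("account_id").orderBy(F.col("updated_at").desc())
    return (df.withColumn("rn", F.row_number().over(w))
              .filter(F.col("rn") == 1)
              .drop("rn"))

@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").getOrCreate()

def test_dedupe_latest(spark):
    df = spark.createDataFrame(
        [("a1", "2024-01-01"), ("a1", "2024-02-01"), ("b2", "2024-01-15")],
        ["account_id", "updated_at"],
    )
    out = {r["account_id"]: r["updated_at"] for r in dedupe_latest(df).collect()}
    assert out == {"a1": "2024-02-01", "b2": "2024-01-15"}
```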
Posted 5 days ago
8.0 - 12.0 years
30 - 35 Lacs
Pune
Work from Office
About The Role:
Job Title: Senior Engineer PD, AVP
Location: Pune, India

Role Description
Our team is part of the area Technology, Data, and Innovation (TDI) Private Bank. Within TDI, Partner data is the central client reference data system in Germany. As a core banking system, many banking processes and applications are integrated with it and communicate via more than 2,000 interfaces. From a technical perspective, we focus on the mainframe but also build solutions on premise cloud, RESTful services, and an Angular frontend. Next to maintenance and the implementation of new CTB requirements, the content focus also lies on the regulatory and tax topics surrounding a partner/client. We are looking for a very motivated candidate for the Cloud Data Engineer area.

What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- You are responsible for the implementation of the new project on GCP (Spark, Dataproc, Dataflow, BigQuery, Terraform, etc.) across the whole SDLC chain.
- You support the migration of current functionalities to Google Cloud.
- You are responsible for the stability of the application landscape and support software releases.
- You also support L3 topics and application governance.
- You are responsible in the CTM area for coding as part of an agile team (Java, Scala, Spring Boot).

Your skills and experience
- You have experience with databases (BigQuery, Cloud SQL, NoSQL, Hive, etc.) and development, preferably for Big Data and GCP technologies (see the sketch below).
- Strong understanding of the Data Mesh approach and integration patterns.
- Understanding of Party data and integration with Product data.
- Your architectural skills for big data solutions, especially interface architecture, allow a fast start.
- You have experience in at least Spark, Java, Scala and Python, Maven, Artifactory, the Hadoop ecosystem, GitHub Actions, GitHub, and Terraform scripting.
- You have knowledge of customer reference data, customer opening processes, and preferably regulatory topics around know-your-customer processes.
- You work very well in teams but also independently, and are constructive and target-oriented.
- Your English skills are good and you can communicate both professionally and informally in small talk with the team.

How we'll support you

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
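As referenced above, a brief sketch of BigQuery access from Python with the official client library; the project, dataset, and table names are placeholders.

```python
# BigQuery access sketch; project, dataset, and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # uses default credentials

query = """
    SELECT partner_id, COUNT(*) AS n_accounts
    FROM `my-project.partner_data.accounts`
    GROUP BY partner_id
    ORDER BY n_accounts DESC
    LIMIT 10
"""
for row in client.query(query).result():  # runs the job and waits
    print(row.partner_id, row.n_accounts)
```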
Posted 5 days ago
8.0 - 13.0 years
32 - 37 Lacs
Bengaluru
Work from Office
About The Role:
Job Title: Data Modeler, VP
Location: Bangalore, India
Corporate Title: VP

Role Description
A Passion to Perform. It's what drives us. More than a claim, this describes the way we do business. We're committed to being the best financial services provider in the world, balancing passion with precision to deliver superior solutions for our clients. This is made possible by our people: agile minds, able to see beyond the obvious and act effectively in an ever-changing global business landscape. As you'll discover, our culture supports this. Diverse, international and shaped by a variety of different perspectives, we're driven by a shared sense of purpose. At every level agile thinking is nurtured. And at every level agile minds are rewarded with competitive pay, support and opportunities to excel.

The Office of the CSO - Data Enablement Tribe brings together the Business, Technology and Operational pillars of the Bank to provide information security services to Deutsche Bank. We are responsible for developing, implementing, maintaining and protecting the entire IT and operations infrastructure required to support all of the Bank's businesses.

Overview
Data Enablement is responsible for delivering a comprehensive near-time reporting platform covering all CSO controls. The reporting will provide business intelligence for the security posture of all banking applications and portfolios, enabling improved risk management practices.

What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Produce and maintain a large and complex cloud data warehouse data model according to recognised best practice and standards.
- Capture complex requirements from stakeholders and undertake detailed analysis.
- Design solutions in terms of physical and logical data models.
- Communicate data model designs to the ETL and BI development teams and respond to feedback.

You will have
- Thorough knowledge of BI and cloud data warehouse data modelling best practice.
- Strong knowledge of relational and dimensional data models and experience of data modelling, ideally in a cloud environment.
- Ability to solve complex problems.
- Experience of working in agile delivery environments.
- Awareness of data warehouse architectures and data management best practices.
- Awareness of ETL, database, Big Data and BI presentation layer technologies.
- Experience in using BigQuery and SAP PowerDesigner or Sparx EA.
- Experience with requirements gathering and documentation using a structured approach.
- Ability to write SQL and undertake detailed analysis.
- Experience of working with globally distributed teams.
- Excellent communication skills.
- Some understanding of information security and risk is desirable.

You will be
- Able to work in a fast-paced environment and deal with sudden changes in priorities.
- Open-minded and able to share information.
- Able to prioritise effectively and work with minimal supervision.

Your skills and experience
Seniority: Senior (5+ years)
Competencies:
- Must have: SQL, data warehousing, data modelling
- Nice to have: cloud (especially Google Cloud), data analysis, information security, financial services/cyber security, SAP BusinessObjects, and SAP PowerDesigner or Sparx EA

How we'll support you
Posted 5 days ago
3.0 - 8.0 years
10 - 14 Lacs
Noida, Hyderabad
Work from Office
Position Overview:
We are seeking a highly skilled and tactically focused Technical Product Owner to play a critical role in supporting our data engineering initiatives, particularly in the ETL, big data, and integrations space. This role is critical to optimizing the value of the development team's work, managing the backlog, and ensuring seamless execution and delivery of enterprise cross-functional initiatives. In this role, you will collaborate with cross-functional teams, manage the product backlog, and ensure smooth execution of data engineering initiatives.

Job Responsibilities:
- Own and manage the product backlog for data engineering projects.
- Translate product and technical requirements into user stories, prioritize and groom the product backlog, and align with development on priorities.
- Ensure backlog items are visible, clearly defined, and prioritized.
- Help define and manage metadata, data lineage, and data quality.
- Collaborate with IT data owners and business leaders to ensure consistent and reliable data definitions.
- Coordinate with onshore and offshore development, engineering, design, and business teams for seamless execution of data engineering initiatives.
- Document processes and train business stakeholders on platform features.
- Communicate effectively across technical and non-technical stakeholders.

Basic Qualifications:
- 3+ years in a Product Owner, Business Systems Analyst, or similar role in data engineering.
- Experience with Snowflake, BigQuery, or similar platforms.
- Proficiency in Jira and Confluence; strong knowledge of Agile Scrum methodologies and experience working within an Agile framework.
- Strong analytical skills with experience in collecting, analyzing, and interpreting data; comfort with A/B testing and KPIs.
- Excellent verbal and written communication skills; experience translating complex requirements into clear, actionable items.
- Proven ability to work effectively with cross-functional teams, including offshore development, engineering, UX, and business stakeholders.
- Ability to troubleshoot issues, proactively improve processes, and ensure smooth platform operations.
Posted 5 days ago
6.0 - 10.0 years
14 - 19 Lacs
Pune
Work from Office
Project Role: Business and Integration Architect
Project Role Description: Designs the integration strategy, endpoints, and data flow to align technology with business strategy and goals. Understands the entire project life cycle, including requirements analysis, coding, testing, deployment, and operations, to ensure successful integration.
Must-have skills: Generative AI
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary: As a Business and Integration Architect, you will design the integration strategy, endpoints, and data flow to align technology with business strategy and goals. Your typical day will involve collaborating with various teams to understand project requirements, analyzing data flows, and ensuring that integration processes are efficient and effective. You will engage in discussions to refine strategies and provide insights that drive the project lifecycle from requirements analysis to deployment and operations, ensuring that all aspects of the integration align with the overarching business objectives.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate workshops and meetings to gather requirements and align team objectives.
- Mentor junior professionals to enhance their skills and understanding of integration strategies.

Professional & Technical Skills:
- Must-have skills: proficiency in Generative AI.
- Strong understanding of integration frameworks and methodologies.
- Experience with data modeling and architecture design.
- Ability to analyze complex business requirements and translate them into technical solutions.
- Familiarity with cloud technologies and their integration with on-premise systems.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Generative AI.
- This position is based in Pune.
- 15 years of full-time education is required.
Posted 5 days ago
4.0 - 9.0 years
7 - 11 Lacs
Coimbatore
Work from Office
We are looking for a highly skilled AI Implementation Engineer with a focus on Python to join our team at Techjays. The ideal candidate will have 4 to 9 years of experience in the field.

Roles and Responsibilities
- Design, develop, and implement AI models using Python.
- Collaborate with cross-functional teams to identify business problems and develop solutions.
- Develop and maintain large-scale data pipelines and architectures.
- Work closely with data scientists to integrate AI models into existing systems.
- Troubleshoot and optimize AI model performance issues.
- Stay up to date with industry trends and advancements in AI.

Job Requirements
- Strong proficiency in the Python programming language.
- Experience with AI frameworks such as TensorFlow or PyTorch.
- Knowledge of machine learning algorithms and deep learning techniques.
- Familiarity with cloud platforms such as AWS or Google Cloud.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
Posted 5 days ago
6.0 - 11.0 years
7 - 11 Lacs
Nagercoil
Work from Office
We are looking for a skilled Python Developer with expertise in Machine Learning to join our team at Panacorp Software Solutions. The ideal candidate will have at least 6 years of experience and a strong background in developing scalable and efficient machine learning models.

Roles and Responsibilities
- Design, develop, and deploy machine learning models using Python.
- Collaborate with cross-functional teams to identify business problems and develop solutions.
- Develop and maintain large-scale data pipelines and architectures.
- Implement automated testing and deployment scripts.
- Troubleshoot and resolve technical issues related to machine learning models.
- Stay updated with industry trends and advancements in machine learning.

Job Requirements
- Strong proficiency in the Python programming language.
- Experience with machine learning frameworks such as TensorFlow or PyTorch.
- Knowledge of deep learning techniques and their applications.
- Familiarity with cloud platforms such as AWS or Google Cloud.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
Posted 5 days ago
2.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Data Modeling Techniques and Methodologies
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture strategy. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and evaluate team performance to ensure alignment with project goals.

Professional & Technical Skills:
- Must-have skills: proficiency in Data Modeling Techniques and Methodologies.
- Strong understanding of data integration processes and tools.
- Experience with data warehousing concepts and practices.
- Familiarity with ETL processes and data pipeline development.
- Ability to work with various database management systems.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Data Modeling Techniques and Methodologies.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 5 days ago
12.0 - 15.0 years
18 - 22 Lacs
Bengaluru
Work from Office
Project Role: Data Platform Architect
Project Role Description: Architects the data platform blueprint and implements the design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 12 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary: As a Data Platform Architect, you will be responsible for architecting the data platform blueprint and implementing the design, which includes the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also addressing any challenges that arise during implementation. You will engage with stakeholders to align on project goals and ensure that the architecture meets the needs of the organization, all while maintaining a focus on best practices and innovative solutions.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems that apply across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and adjust strategies as necessary to meet objectives.

Professional & Technical Skills:
- Must-have skills: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of data architecture principles and best practices.
- Experience with cloud-based data solutions and services.
- Ability to design and implement data integration strategies.
- Familiarity with data governance and compliance standards.

Additional Information:
- The candidate should have a minimum of 12 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 5 days ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Data Warehouse ETL Testing
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing solutions, and ensuring that applications function seamlessly to support organizational goals. You will also participate in testing and validation processes to guarantee that the applications meet the required standards and specifications, contributing to the overall success of the projects you are involved in.

Roles & Responsibilities:
- Perform independently and grow into a subject matter expert (SME).
- Actively participate in and contribute to team discussions.
- Contribute to solutions for work-related problems.
- Assist in documenting application requirements and specifications.
- Engage in continuous learning to stay current with industry trends and technologies.

Professional & Technical Skills:
- Must-have skills: proficiency in Data Warehouse ETL Testing.
- Strong understanding of data integration processes and methodologies.
- Experience with various ETL tools and frameworks.
- Ability to perform data validation and quality assurance checks.
- Familiarity with database management systems and SQL.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Data Warehouse ETL Testing.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 5 days ago
4.0 - 9.0 years
10 - 14 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Data Services
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that the applications developed meet both user needs and technical requirements. Your role will be pivotal in fostering a collaborative environment that encourages innovation and problem-solving among team members.

Roles & Responsibilities:
- Minimum of 4 years of experience in data engineering or similar roles.
- Proven expertise with Databricks and data processing frameworks.
- Technical skills: SQL, Spark, PySpark, Databricks, Python, Scala, Spark SQL.
- Strong understanding of data warehousing, ETL processes, and data pipeline design.
- Experience with SQL, Python, and Spark.
- Excellent problem-solving and analytical skills.
- Effective communication and teamwork abilities.

Professional & Technical Skills:
- Experience and knowledge of Azure SQL Database, Azure Data Factory, and ADLS.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Data Services.
- This position is based in Pune.
- 15 years of full-time education is required.
Posted 5 days ago
5.0 - 8.0 years
10 - 14 Lacs
Gurugram
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Apache Spark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary: As an Application Lead, you will be responsible for designing, building, and configuring applications. Acting as the primary point of contact, you will lead the development team, oversee the delivery process, and ensure successful project execution.

Roles & Responsibilities:
- Act as a Subject Matter Expert (SME) in application development.
- Lead and manage a development team to achieve performance goals.
- Make key technical and architectural decisions.
- Collaborate with cross-functional teams and stakeholders.
- Provide technical solutions to complex problems across multiple teams.
- Oversee the complete application development lifecycle.
- Gather and analyze requirements in coordination with stakeholders.
- Ensure timely and high-quality delivery of projects.

Professional & Technical Skills:
- Must-have skills: proficiency in Apache Spark; strong understanding of big data processing; experience with data streaming technologies; hands-on experience building scalable, high-performance applications; knowledge of cloud computing platforms.
- Must-have additional skills: PySpark, Spark SQL / SQL, AWS.

Additional Information:
- This is a full-time, on-site role based in Gurugram.
- Candidates must have a minimum of 5 years of hands-on experience with Apache Spark.
- A minimum of 15 years of full-time formal education is mandatory.
Posted 5 days ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.